To use our compute cluster you need a full GWDG account, which most employees of the University of Göttingen and the Max Planck Institutes already have. By default, this account is not activated for the use of the compute resources. To get it activated, an email to the support team, sent from the mail address belonging to the account to be activated, is sufficient.

Student accounts in particular cannot be activated for HPC. If you have a student account, or if you are unsure whether you have a full GWDG account, please refer to the full documentation.

As with all our services, the usage of our HPC resources is accounted in a fictitious currency, so-called Work Units (“Arbeitseinheiten”, AE). For the current pricing, see the Dienstleistungskatalog.

Logging in

Once you gain access, you can log in to the login nodes. These nodes are accessible via ssh from the GÖNET. If you come from the internet, the preferred way to gain access to the GÖNET is to use a VPN connection. Alternatively, you can first log in to one of the login nodes; from there you can then reach the frontends.

ssh <GWDG username>
ssh login-<fas|mdc>

From these login nodes, you will be forwarded to one of the frontend nodes gwdu101, gwdu102, or gwdu103. These frontends are meant for editing, compiling, and interacting with the batch system. Please do not use them for testing for more than a few minutes: all users share the resources on the frontends and would be impaired in their daily work if you overuse them. gwdu101 is an AMD-based system, while gwdu102 and gwdu103 are Intel-based. If your software takes advantage of special CPU-dependent features, it is recommended to compile on the same CPU architecture you target for running your jobs. Using ssh gwdu<101|102|103>, you can connect from any frontend node to any other frontend node.
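If you are unsure which CPU architecture the node you are currently on has, a quick way to check on Linux is to read the vendor string from /proc/cpuinfo (a generic sketch, not a cluster-specific tool):

```shell
# Print the CPU vendor of the current node, e.g. before compiling
# architecture-specific code. On this cluster, "AuthenticAMD" would
# indicate the AMD frontend and "GenuineIntel" the Intel frontends.
grep -m1 'vendor_id' /proc/cpuinfo
```
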

Preparing your environment

HPC systems provide software for many different users, and often even different versions of the same software (e.g. compilers). To prevent dependency clashes and similar problems, the software is provided in so-called “modules”, which users can load according to their needs.

To see all available modules, use module avail. Once you know which modules you need, you can load them with module load <module name>. The necessary environment variables, e.g. LD_LIBRARY_PATH and PATH, are set by the module. With module show <module name> you can see further details of a module, e.g. which environment variables it sets. A complete guide to our module system and the installation of new modules can be found here.
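A typical session might look like the following sketch. The module name gcc is only an illustration; the modules and versions actually available depend on the system, so check module avail first:

```shell
# List all modules available on this system
module avail

# Inspect a module before loading it (shows the environment
# variables it would set); "gcc" is an illustrative module name
module show gcc

# Load the module; this adjusts PATH, LD_LIBRARY_PATH, etc.
module load gcc

# Show the modules currently loaded in this shell
module list
```

These commands only affect the current shell session, so modules needed inside a batch job must be loaded again in the job script.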

All information about the workload manager Slurm can be found in our documentation.
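As a minimal illustration of a Slurm batch script (the partition name, resource values, and program name below are placeholders; consult the documentation for the options valid on this cluster):

```shell
#!/bin/bash
#SBATCH --job-name=myjob        # job name shown in the queue
#SBATCH --partition=medium      # placeholder partition name; check the documentation
#SBATCH --ntasks=1              # number of tasks
#SBATCH --time=00:10:00         # wall-clock time limit
#SBATCH --output=job-%j.out     # stdout file, %j is replaced by the job ID

module load gcc                 # illustrative: load the modules your program needs
srun ./my_program               # placeholder executable
```

Such a script is submitted with sbatch, e.g. sbatch job.sh, and the status of your jobs can be checked with squeue -u $USER.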