In addition to the SCC, the GWDG hosts a number of other compute clusters. The term "HPC hosting" describes a flexible offer for institutes that have their own funds for HPC resources and want to use them as efficiently as possible. Suitable systems are planned, tendered and operated in close collaboration with the scientists. The GWDG can be consulted for individual steps, provide support throughout the entire process, or take over certain parts of the process or the process as a whole.
During operation, different models are possible:
The hardware is operated by the GWDG in its HPC environment. For this purpose, the hardware is integrated into the existing networks. Root access cannot be granted due to this tight integration. The software environment is provided and maintained by the GWDG; however, installations in the home directory are possible, as on the general HPC resources. The third-party hardware may differ from the rest of the systems in the GWDG's HPC environment. The institute owning the third-party system can choose between two operation modes:
Participation in a tender of the GWDG, or procurement of hardware identical to a GWDG cluster. In this model, essentially the same options as in model 2 are available. Due to the identical hardware, however, the participation can also be converted into "fairshare". In this case, the hardware is integrated into normal operation and can also be shared by other users. The fairshare allows the institute owning the third-party system to submit jobs with higher priority, so that these jobs start preferentially, until the fairshare has been used up. Thanks to the increased fairshare, the institute can distribute high computational demand across significantly more resources than the self-procured ones in the short term. On the other hand, there may be waiting times even when no jobs of the institute are running yet, because all resources in the cluster are already occupied. However, the next free resources are reserved for a job of the institute due to its high priority.
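The fairshare mechanism described above can be thought of as a priority calculation: the more of its allocated share an institute has already consumed, the lower the priority of its next job. The following is a minimal illustrative sketch of such a calculation; it is not the GWDG's actual scheduler, and the weights and the exponential decay formula are assumptions, loosely modelled on common fairshare schemes:

```python
# Illustrative fairshare sketch (assumption: not the GWDG's actual
# scheduler; the weights and decay formula are invented for illustration,
# loosely following common fairshare scheduling schemes).

def fairshare_factor(allocated_share: float, used_share: float) -> float:
    """Return a factor in (0, 1]: 1.0 when none of the share has been
    used, decaying towards 0 as usage grows relative to the allocation."""
    if allocated_share <= 0:
        return 0.0
    # Exponential decay: halves each time usage grows by one allocation.
    return 2 ** (-used_share / allocated_share)

def job_priority(base: int, allocated_share: float, used_share: float,
                 fairshare_weight: int = 1000) -> float:
    """Combine a base priority with the weighted fairshare factor."""
    return base + fairshare_weight * fairshare_factor(allocated_share,
                                                      used_share)

# An institute that has used none of its share outranks one that has
# already consumed twice its share, even with the same base priority.
fresh = job_priority(base=100, allocated_share=0.2, used_share=0.0)
spent = job_priority(base=100, allocated_share=0.2, used_share=0.4)
```

In this toy model, `fresh` evaluates to a higher priority than `spent`, mirroring the behaviour in the text: jobs start preferentially while fairshare remains, and the advantage fades as it is used up.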
Examples of complete integration are various working groups of the Max Planck Institute for Multidisciplinary Sciences in Göttingen and the upcoming system of the Campus Institute Dynamics of Biological Networks (CIDBN). A current example of hardware operation is the latest cluster of the Faculty of Chemistry at the University of Göttingen.
The Campus Institute Data Science (CIDAS) is a joint scientific institution on the Göttingen Campus. The involved parties are the University of Göttingen, the University of Applied Sciences and Arts (HAWK), the University Medical Centre, the five local Max Planck Institutes, the Academy of Sciences and Humanities, the German Aerospace Centre and the German Primate Centre. CIDAS forms an interface between computer science, statistics, mathematics and the various disciplines, combining method development with the internationally leading research of the Göttingen Campus. With the help of CIDAS, campus-wide research, teaching and further education in the field of Data Science at the University, the HAWK and the Göttingen Campus will be coordinated and carried out efficiently.
The Campus Institute for Dynamics of Biological Networks (CIDBN) is a cross-faculty institute of the University of Göttingen, the University Medical Centre Göttingen and the Max Planck Institute for Dynamics and Self-Organization. Its research focuses on the investigation of dynamic processes, with the goal of better capturing biological information processing and representing it in computational models. For this purpose, the CIDBN, in collaboration with the GWDG, is setting up a platform for scientific high-performance computing that is explicitly customized for computational and theory-based research on biological networks. This platform will be a foundation for the further development of computer- and data-based research in the life sciences at the Göttingen Campus. The HPC resources of the CIDBN will be hosted completely within the integrated operational concept as part of the SCC.