Application Software & User Support Group

The Application Software and User Support Group (OWU) at ICM is dedicated to providing HPC users with direct help-desk support as well as expertise shared through periodic tutorials, training, and Q&A sessions. The former is supported by integrated software solutions for managing research projects, the user base, HPC accounting, and mechanisms that automate application deployment across heterogeneous infrastructure. The latter is a continuous effort to broaden general computing knowledge, with topics ranging from the fundamentals of supercomputing, operating systems, and parallel programming to domain-specific scientific tutorials.
The Group employs specialists with diverse scientific and technical backgrounds to address users’ domain-specific requirements. This expertise allows us to continually take on challenging projects that present new opportunities, including both academic projects and commercial ventures requiring a variety of software licensing options (free or proprietary).
The ICM supercomputing facility evolves along with its users and global research trends. OWU adapts to this ever-changing environment by organising and actively engaging in the annual HPC User Conference, where the newest technologies, research, and technical issues are discussed and resolved.
OWU’s core responsibilities include:
  • HPC User Support
  • Application Software Deployment, Testing and Monitoring
  • Engaging in Domain-Specific Conferences and Research Activities
  • Developing and Maintaining Software Documentation
  • Organising Tutorials and Training Sessions

The documentation is hosted at:

The Resource Allocation System is available at:

ICM computational infrastructure

The ICM computational infrastructure comprises two data centers located in Warsaw – Pawinskiego and the Technology Centre. Scientific research is carried out mainly on the two largest computer systems – a Cray XC40 and a Huawei cluster – both based on the Intel Xeon architecture. The first, named Okeanos, provides a total of over 26 thousand CPU cores (24 cores and 128 GB of RAM per node). The second, named Topola, provides a total of over 6 thousand CPU cores (28 cores and 64/128 GB of RAM per node). Additionally, the infrastructure is complemented by auxiliary systems dedicated to research and development as well as internal projects – most notably a GPU (NVIDIA Volta) cluster and a NEC SX-Aurora TSUBASA vector computer.
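On clusters of this kind, compute nodes are typically accessed through a batch scheduler rather than directly. As a hedged illustration only – assuming a Slurm scheduler and a hypothetical partition name `topola` (the actual scheduler, partition names, and module names should be checked in the site documentation) – a minimal job script matching the Topola node geometry described above might look like:

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --partition=topola        # hypothetical partition name; verify in site docs
#SBATCH --nodes=1                 # request one full node
#SBATCH --ntasks-per-node=28      # Topola nodes have 28 CPU cores each
#SBATCH --mem=64G                 # fits the 64 GB node variant
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)

module load openmpi               # environment module; exact name may differ per site
srun ./my_mpi_app                 # launch one MPI rank per allocated core
```

Such a script would be submitted with `sbatch job.sh` and monitored with `squeue`; the resource requests in the `#SBATCH` header are parsed by the scheduler, not by the shell.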