High performance computing
High performance computing (HPC) is the use of computer clusters with especially powerful processors, large memory, and ample storage to tackle problems that would be difficult or impossible on a single PC.
Exploit parallel computing - HPC
Parallel computing - breaking problems into smaller parts and processing them simultaneously to tackle otherwise infeasible work - is now essential in many research areas, from climate modelling and structural mechanics to materials science and drug discovery. Research that could take years on a single PC may be completed in just hours.
High performance computing (HPC) is a parallel computing approach that uses clusters (networks) of specialist computers with powerful processors, large memory and storage space. Research IT's Platforms team provides our HPC solution, called Barkla, for appropriate research at Liverpool. Where the parts of a task need to communicate efficiently, programs must be written specifically (as established HPC applications are) to take advantage of the cluster; otherwise, little or no speed-up may be seen. Alternatively, substantial speed-ups are possible by running batches (job arrays) of independent tasks across the cluster, for example to explore a problem space.
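To make the communication point concrete, tightly coupled HPC programs are typically written against a message-passing library such as MPI so that their parts can cooperate across nodes. The following minimal sketch uses Python with the mpi4py package (an illustrative choice; established HPC applications more often use MPI from C or Fortran) to split a simple sum across processes and combine the partial results:

```python
# Minimal MPI sketch: each process (rank) sums part of a range,
# then the partial sums are combined on rank 0.
# Assumes the mpi4py package; launch with e.g. "mpirun -n 4 python sum_mpi.py".
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's ID (0, 1, 2, ...)
size = comm.Get_size()          # total number of processes

N = 10_000_000
# Each rank takes every size-th number, starting from its own rank.
partial = sum(range(rank, N, size))

# Combine the partial sums on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of 0..{N-1} = {total}")
```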
Large batches of independent jobs (~1000 or more), each of which runs acceptably on an ordinary PC, are more appropriate for our high-throughput computing (HTC) platform, HTCondor. For further details on Barkla and other platforms at Liverpool, including user guides, work procedures, and substantial technical information, please see our technical documents site (intranet or VPN).
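By contrast, high-throughput and job-array work needs no communication between tasks: each job independently selects its own piece of the problem from a task index supplied at launch. A minimal sketch in Python; the input and output file names are hypothetical, the SLURM_ARRAY_TASK_ID environment variable assumes a SLURM-style job array, and under HTCondor an index such as the $(Process) macro can instead be passed on the command line:

```python
# Minimal high-throughput sketch: each independent job processes one input
# file, chosen by a task index. With a job array the index might come from an
# environment variable (SLURM_ARRAY_TASK_ID assumes a SLURM-style scheduler);
# with HTCondor it can be passed as a command-line argument via $(Process).
# File and directory names are hypothetical.
import os
import sys
from pathlib import Path

# Prefer a command-line index (HTCondor style), fall back to the environment.
if len(sys.argv) > 1:
    task_id = int(sys.argv[1])
else:
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", 0))

inputs = sorted(Path("inputs").glob("sample_*.dat"))
my_input = inputs[task_id]

# Placeholder "analysis": count lines in this job's input file.
with my_input.open() as f:
    n_lines = sum(1 for _ in f)

Path("results").mkdir(exist_ok=True)
Path(f"results/result_{task_id}.txt").write_text(f"{my_input.name}: {n_lines} lines\n")
```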
Our team
Research IT's Platforms team provides and develops Liverpool's research computing platforms, including the Barkla cluster and our cloud platforms.
The team studies emerging HPC technologies, consults with other HPC facilities, and works closely with the research community to ensure that the platforms meet researchers' needs, providing training and workshops to help users make the best use of them. The team also advises research groups on funding applications and on specialist platforms for their work; indeed, many of Barkla's nodes have been specified, and added over time, for specific groups here. Furthermore, the team manages and approves researchers' applications for access to national and world-leading HPC supercomputers.
Have questions for our team? Considering working with us? Contact us at hpc-support@liverpool.ac.uk or join our mailing list for the latest news and events.
The Barkla HPC cluster
Liverpool's Barkla platform is a Linux HPC cluster, currently consisting of 183 specialist computers (called nodes). A recent, significant investment will boost the cluster's performance and capabilities several-fold, opening doors to new research via:
- 58 compute nodes, each with 168 cores (two AMD EPYC 9634 CPUs), 1536 GB RAM (9 GB/core), and 3.84 TB local NVMe storage.
- 4 general purpose GPU nodes - as compute nodes but with two NVIDIA L40S GPUs.
- 3 deep-learning focused GPU nodes, each with 96 cores (two Intel Xeon Platinum 8468 CPUs), 2048 GB RAM (21 GB/core), four NVIDIA H100 SXM GPUs, and 7.68 TB local NVMe storage.
- network storage includes 2 PB for short- and medium-term work (NFS with backup), and 2 PB of Lustre parallel storage for all tasks, including I/O-intensive work. Nodes connect via a fast 200 Gb/s dual Intel Omni-Path interconnect.
The new cluster is expected to be online in April 2025 (retaining the current nodes from the last four years). The current Barkla platform features:
- 158 compute nodes,* each with 40 cores (two 20-core Intel Xeon Gold 6138 CPUs) and 384 GB RAM (9.6 GB/core).
- 2 large memory nodes - as compute nodes but with 1.1 TB RAM (27.5 GB/core).
- 4 accelerator nodes, each with 64 cores (one Intel Xeon Phi 7230 CPU; 4 threads/core) and 192 GB RAM (3 GB/core).
- 5 GPU nodes, each with 24 cores (two 12-core Intel Xeon E5-2650 CPUs), 4 GPUs (NVIDIA Pascal / Volta), and 384 to 512 GB RAM.
- 14 GPU nodes,* each with 48 cores (two 24-core AMD EPYC CPUs), 2-4 GPUs (NVIDIA Ampere), and 256 to 512 GB RAM.
- network storage includes 200 TB for ordinary work, and 360 TB of parallel storage for I/O-bound work. Nodes connect via a 100 Gb/s Intel Omni-Path interconnect.
(*) While available to all, 90 compute nodes and 14 GPU nodes are funded by certain research groups, who have job priority on them.
Quick considerations
- Large memory nodes support jobs with memory requirements beyond those of the standard compute nodes. Certain problems, e.g. deep learning, often work best on a massively parallel GPU node (see the sketch after this list).
- Additional visualisation nodes provide remote desktop GUI platforms for pre- and post-processing work on data.
- While Barkla has several hundred terabytes of shared storage, storage use may need consideration when running hundreds of jobs. To ensure availability for everyone, job runtime is usually limited to three days.
- Job executables must be built for a Linux host (not Windows), but Barkla's range of pre-installed research apps and tools allows your research to start immediately.
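To illustrate the GPU note above, a deep-learning code usually only needs to detect and select an available GPU and the framework does the rest. A minimal sketch assuming PyTorch (one common framework; check the technical documents site for what is actually pre-installed on Barkla):

```python
# Minimal sketch: pick a GPU if one is available, otherwise fall back to CPU.
# Assumes the PyTorch package; the matrix sizes are arbitrary.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

# A small matrix multiplication, executed on the chosen device.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b
print(f"Result shape: {tuple(c.shape)}")
```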
Get started
Barkla is freely available to researchers registered for MWS. After considering the above advice on suitable HPC jobs, please first register via our self-service portal: Select Request > Accounts > Application to access high performance/throughput computing facilities.
In your application please briefly describe your project and detail:
- What software is needed?
- If using source code, what languages are used, e.g. C/C++, Python, Fortran, MATLAB, R?
- Are additional libraries, packages, or other apps needed?
- Are these free of charge or commercial packages - are there any licensing restrictions?
- How long will jobs run for, and how much memory and disk space are needed? (Estimate if unknown today; the sketch after this list may help.)
- Is special hardware needed, e.g. GPUs?
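If runtime and memory needs are unknown, a short trial run on a Linux PC can provide rough figures for the application. A minimal sketch using only Python's standard library; my_analysis() is a hypothetical placeholder for your own workload:

```python
# Minimal sketch for estimating job requirements: time a trial run and report
# its peak memory use. Assumes Linux, where ru_maxrss is reported in kilobytes.
# my_analysis() is a hypothetical placeholder for the real workload.
import resource
import time

def my_analysis():
    # Placeholder workload: build and sum a large list.
    data = [i * i for i in range(5_000_000)]
    return sum(data)

start = time.perf_counter()
my_analysis()
elapsed = time.perf_counter() - start

peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Wall time: {elapsed:.1f} s")
print(f"Peak memory: {peak_kb / 1024:.0f} MB")
```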
Beyond Liverpool, researchers may benefit from opportunities at other regional/specialist (Tier-2, e.g. Bede) or national (Tier-1, e.g. ARCHER2) HPC facilities. For truly exceptional research, please consider a peer-reviewed application for free access to a world-leading Tier-0 facility, e.g. PRACE (~100,000 cores; typically 1,000 cores per job). This competitive route gives access to some of the largest supercomputers in the world. In all cases, please speak to our team for further advice and support.
Share your experience
If your research has benefitted from our services, please email us at hpc-support@liverpool.ac.uk to let us know. We'd love to see anything from articles and presentations to theses and conference proceedings. This helps us tailor our services and secure funding for future facilities and projects.