High throughput computing
High throughput computing (HTC) uses huge pools of ordinary computers to tackle problems that would otherwise take a very long time on a single PC.
Exploit parallel computing - HTC
Parallel computing, breaking problems into smaller parts and processing these simultaneously to tackle otherwise infeasible work, is now essential in many research areas - from climate modelling and structural mechanics, to materials science and drug discovery. Research that could take years on a single PC may be completed in just hours.
High throughput computing (HTC) is a parallel computing approach which uses huge pools of ordinary PCs to run batches of jobs (programs), each of which runs entirely on a single PC. Jobs usually run completely independently, with no communication between them, and should not need substantial processing power, memory, or storage space. Batches may finish several hundred times faster than running the same work serially on one PC, opening the door to new questions, such as exploring large parameter spaces. In many cases the HTC approach can outperform HPC options, simply by harnessing massive pools of cheap, ordinary PCs.
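The batch pattern above can be sketched in a few lines. This is a hypothetical example (the parameter list, `run_job` function, and squaring computation are illustrative stand-ins): each job receives a single index, works on its own slice of the problem space, and never communicates with other jobs - which is what makes the batch suitable for HTC.

```python
# Hypothetical sketch of an "embarrassingly parallel" parameter sweep.
# Each HTC job runs this script with a different index, so a batch of
# 1000 jobs covers the whole parameter space with no communication.
import sys

PARAMETERS = [0.1 * i for i in range(1000)]  # the full problem space

def run_job(index: int) -> float:
    """One independent job: evaluate the model at a single parameter."""
    p = PARAMETERS[index]
    return p ** 2  # stand-in for the real computation

if __name__ == "__main__":
    if len(sys.argv) > 1:
        idx = int(sys.argv[1])  # the batch system can pass the job number here
        print(run_job(idx))
```

Because each invocation depends only on its index, jobs can be queued, aborted, and restarted independently without affecting the rest of the batch.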
The HTCondor platform
Liverpool's HTCondor platform consists of a pool of ~1900 teaching lab and library PCs running MWS (64-bit Windows 10). Pool size varies considerably depending on time of day, week, or year, particularly during summer vacation when centres may close for refurbishment. Each PC has an Intel Core i7 (four core) 3 GHz processor, 16 GB RAM, and 330 GB of disk space.
Jobs are queued according to PC availability. The most efficient batches use jobs needing little disk space (a few GB) and short runtimes (~30 minutes). Longer jobs may be evicted when a user logs in to the PC or other MWS tasks take priority. While evicted jobs can be restarted, it may be difficult to run jobs for more than 12 hours unless they can resume from where they left off. Executables - whether built from C/C++, Fortran, or MATLAB source, or pre-existing - must be built for a Windows host.
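A batch of this kind is typically described in an HTCondor submit file. Below is a minimal, illustrative sketch - the executable name, file names, and resource figures are hypothetical, not a Liverpool template - queuing 100 independent runs of one Windows executable, with modest per-job resource requests in line with the guidance above:

```
# Hypothetical HTCondor submit file: 100 short, independent jobs.
executable     = myprog.exe
arguments      = input_$(Process).dat
output         = out_$(Process).txt
error          = err_$(Process).txt
log            = batch.log
request_memory = 1GB
request_disk   = 2GB
queue 100
```

Submitting this with `condor_submit` queues 100 jobs; HTCondor substitutes `$(Process)` with each job's number (0-99), so every job reads its own input file and writes its own output.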
Alternatively, please see our high performance computing (HPC) cluster, named Barkla, whose standard compute nodes each have 40 cores, 384 GB RAM (9.6 GB/core), with job runtimes up to three days (but expect long waits when Barkla is busy).
For further details on HTCondor and other platforms at Liverpool, including user guides, work procedures, and substantial technical information, please see our technical documents site (intranet or VPN).
Have questions for the HTCondor team? Considering working with us? Contact us at htc-support@liverpool.ac.uk or join our mailing list for the latest HTC news and events.
Get started
HTCondor is freely available to researchers registered for MWS. After considering the above advice on suitable jobs, please register via our self-service portal: Select Request > Accounts > Application to access high performance/throughput computing facilities. In your application please briefly describe your project and detail:
- What software is needed?
- If using source code, what languages are used, e.g. C/C++, Python, Fortran, MATLAB, R?
- Are additional libraries, packages, or other apps needed?
- Are these free of charge or commercial packages, and are there any licensing restrictions?
- How long will jobs run for, and how much memory and disk space are needed? (Estimates are fine if you don't yet know.)
Share your experience
If your research has benefitted from our services, please email us at htc-support@liverpool.ac.uk to let us know. We'd love to see anything from articles and presentations, to theses and conference proceedings. This helps us to tailor our services for the future and helps us to secure funding for future facilities and projects.