High Throughput CPU Cluster (HTCC1)
General Information
The CS High Throughput CPU Cluster (HTCC1) is a local deployment of the HTCondor job submission and queuing system. It provides process-level parallelization for computation-intensive tasks: thousands of computing jobs can be submitted with a single batch command. All CS staff and students with a valid CSLab UNIX account are eligible to use it.
For information on CSLab computer accounts, please visit Services / CSLab Computer Accounts.
CSLab provides the following administrative and support services for the HTCC:
- System administration and performance monitoring
- Software installation and maintenance
- User account management
- Job queue management
- Allocation of system resources such as disk quota and CPU shares
The Cluster
Currently, the HTCC is composed of one job preparation/submission node and 96 job execution slots. Each slot provides one core of an Intel(R) Xeon(R) Platinum 8280 CPU @ 2.70GHz and 10GB of RAM.
The HTCC can be accessed via any Secure Shell (SSH) client by connecting to
htcc1.cs.cityu.edu.hk (within the CS departmental network)
HTCondor commands can be found under /usr/bin
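As a sketch only, a typical session might look like the following (the username is a placeholder for your CSLab UNIX account; condor_status and condor_q are standard HTCondor client commands):

```
# From a machine on the CS departmental network:
ssh username@htcc1.cs.cityu.edu.hk

# Once logged in, the HTCondor client tools in /usr/bin are on the PATH:
condor_status   # list execution slots and their state
condor_q        # show jobs currently in the queue
```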
htcc1.cs.cityu.edu.hk is an alias for host htcc1a, the HTCondor submission node; the job execution node is htcc1b. All regular Linux programs/scripts can run on both.
The job submission procedure is listed in the logon message of htcc1. A demo package and user's guide can be found in the folder /public/condor_demo
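The authoritative procedure is the one in the logon message and the demo package under /public/condor_demo; purely as an illustration, a minimal HTCondor submit description file might look like this (the executable name and file names are hypothetical):

```
# hello.sub -- minimal HTCondor submit description (illustrative)
executable = hello.sh          # program or script to run on the execution node
arguments  = $(Process)        # each queued job receives its own process index
output     = hello.$(Process).out
error      = hello.$(Process).err
log        = hello.log
request_memory = 1GB           # stay well under the 32GB per-process limit
queue 100                      # submit 100 jobs with one command
```

Running `condor_submit hello.sub` on htcc1 then queues all 100 jobs at once, and `condor_q` shows their progress.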
All processes running on the cluster are limited to 32GB of memory. Processes exceeding this limit are terminated automatically.
User Data
Besides users' home directories, all nodes in the HTCC mount a shared NFS storage at /public. Users can create their own folder there and use up to 200GB of disk space each. /public is hosted on a high-speed storage system, and users are recommended to keep data involved in computation there. However, data and program results should NOT be kept in /public for archiving: there is no backup for files in /public, and all files not accessed for 30 days are removed automatically.
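As a minimal sketch, creating a personal working folder on the shared storage and checking its usage could look like this (the fallback to a temporary directory is only so the script can be tried off-cluster, where /public does not exist):

```shell
#!/bin/sh
# Shared NFS mount on the cluster; fall back to a temp dir when /public
# is not available or not writable (e.g. when testing off-cluster).
PUBLIC_ROOT="${PUBLIC_ROOT:-/public}"
[ -d "$PUBLIC_ROOT" ] && [ -w "$PUBLIC_ROOT" ] || PUBLIC_ROOT="$(mktemp -d)"

# Each user keeps their data under their own folder.
USER_DIR="$PUBLIC_ROOT/$(id -un)"
mkdir -p "$USER_DIR"

# Check how much of the 200GB per-user quota is in use.
du -sh "$USER_DIR"
```

Note that `du` reports actual usage, not the quota itself; since files untouched for 30 days are removed, anything worth keeping should be copied back to your home directory.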