A. HPC Service

The BDLab platform enables researchers to run their own programs and applications across multiple CPU cores, nodes, and GPUs, with parallel file system support. A job queue scheduling system manages user workloads on this platform. A user submits a job to one of several job queues, chosen according to the job's resource requirements, such as the maximum number of computing nodes, the number of GPUs, and the maximum number of CPU cores per computing node.

To run code on the BDLab platform, select a job queue from the tables below according to your application's requirements:

Job queue   Max no. of nodes   CPU cores per node   GPU cards per node   Usable memory per node (GB)
q2s01       25                 26                   N/A                  100
q4s01       3                  64                   N/A                  1000
qgpu01      2                  28                   1                    100
            3                  32                   2                    100
            2                  32                   4                    100
qmic01      2                  68                   N/A                  120
Job queue   Max CPU cores per job   Max concurrent running jobs per user   Max jobs submitted concurrently per user   Max run time limit (walltime, hrs)
q2s01       104                     6                                      10                                         168
q4s01       104                     6                                      10                                         168
qgpu01      104                     6                                      10                                         168
qmic01      104                     6                                      10                                         168
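The queue limits above translate directly into batch job directives. The sketch below assumes a PBS-style scheduler; the document does not name the scheduler in use, and the directives, the script name, and the application `my_parallel_app` are illustrative, so adapt them to the actual system (e.g., Slurm uses `#SBATCH` directives instead):

```shell
#!/bin/bash
# Hypothetical PBS-style job script for the q2s01 queue.
#PBS -q q2s01                 # target queue from the table above
#PBS -l select=2:ncpus=26     # 2 nodes x 26 cores, within the q2s01 per-node limit
#PBS -l walltime=24:00:00     # well under the 168-hour walltime limit
#PBS -N my_job                # job name (illustrative)

# Run from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Launch the application across the allocated cores (name is illustrative).
mpirun ./my_parallel_app
```

Such a script would typically be submitted with `qsub my_job.pbs`, and its status checked with `qstat -u $USER`.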

Because different projects have different demands, users should let us know what computing resources they need (e.g., number of CPU cores, GPUs, and storage size), as well as which applications should be set up on the BDL platform. We are pleased to work with you to set up and build an environment for running your desired programs and applications. To reserve resources, please let us know your plans in advance.

B. Virtual Machine Service

This service provides researchers with their own virtual machines (VMs) for running and testing their applications, with a high degree of control over the operating system. Users can install and extensively customize their applications in this VM environment.

C. JupyterHub Service

This is an online JupyterLab service with GPU support. Users can use this GPU-enabled JupyterLab environment to develop and run GPU-accelerated research applications, such as those built with TensorFlow and Keras.
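As a quick sanity check from a notebook cell, a sketch like the following reports how many GPUs TensorFlow can see. It assumes TensorFlow is installed in the JupyterLab kernel, as the service description suggests, and falls back gracefully if it is not:

```python
def gpu_report():
    """Return a short report of GPUs visible to TensorFlow,
    or a note if TensorFlow is absent from this kernel."""
    try:
        import tensorflow as tf
    except ImportError:
        return "TensorFlow not installed in this kernel"
    gpus = tf.config.list_physical_devices("GPU")
    return f"TensorFlow sees {len(gpus)} GPU(s)"

print(gpu_report())
```

On a GPU-enabled JupyterLab session this should report at least one device; a count of zero usually means the kernel was started without GPU access.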