{"id":210,"date":"2023-06-12T11:05:53","date_gmt":"2023-06-12T15:05:53","guid":{"rendered":"https:\/\/wwwdev.teach.cs.toronto.edu\/?page_id=210"},"modified":"2023-06-12T11:46:47","modified_gmt":"2023-06-12T15:46:47","slug":"remote-gpu-computing","status":"publish","type":"page","link":"https:\/\/wwwdev.teach.cs.toronto.edu\/using-labs\/remote-gpu-computing\/","title":{"rendered":"Remote GPU Computing"},"content":{"rendered":"\n

<p>The Teaching Labs have a small computing cluster. It is meant for teaching distributed computing, scientific computing, GPU programming, and the like; it is neither powerful enough for, nor intended for, production computation. Access is restricted to students registered in specific courses.</p>

<p>Cluster systems can be accessed only through the Slurm workload manager; direct login (e.g. with <em>ssh</em>) is not allowed. See below for a list of basic Slurm commands. Your instructor will provide the details, in particular which partitions are available to your course.</p>
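<p>As a rough illustration only, not official Teaching Labs instructions, a GPU job is normally wrapped in a small batch script and handed to <code>sbatch</code>. In the sketch below, the partition name <code>csc_example</code> and the executable <code>./my_program</code> are placeholders; substitute the partition your instructor names and your own program.</p>

<pre><code>#!/bin/bash
# Minimal Slurm batch script (sketch). The partition name is a
# placeholder; use the partition assigned to your course.
#SBATCH --partition=csc_example   # course partition (placeholder)
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=00:10:00           # wall-clock limit
#SBATCH --output=job-%j.out       # %j expands to the job id

nvidia-smi                        # show which GPU was allocated
./my_program                      # placeholder for your own executable
</code></pre>

<p>Submit the script with <code>sbatch job.sh</code>, check its status with <code>squeue -u $USER</code>, and cancel it with <code>scancel</code> followed by the job id; <code>sinfo</code> lists the partitions and nodes visible to you. For interactive work, <code>srun --partition=csc_example --gres=gpu:1 --pty bash</code> opens a shell on a compute node, again assuming the placeholder partition name.</p>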

<h5>Nodes and partitions</h5>

<p>The cluster contains these nodes: <code>coral01-coral08</code>, eight rack-mounted Supermicro 1019GP-TT systems, each with:</p>