
      HPCC maintains several clusters, named according to their year of installation. Nodes within a cluster have very similar hardware and the same processor model, but configurations vary in details such as co-processors, memory size, and number of CPUs. Despite these differences, the HPCC uses a single-queue system: jobs submitted to the main queue can run on any eligible node unless a cluster constraint is specified. Users only need to specify their resource requirements, and the scheduler assigns each job to an appropriate cluster.

      Prior to July 2018, the MSU HPCC used Torque exclusively for resource management and Moab for job management. On October 15, 2018, all clusters were upgraded to CentOS 7 and switched to the SLURM resource manager. (See 2018 Environment Update and Migration for details.) The clusters share a scratch file system connected via InfiniBand, and each node also mounts the network file systems, so your home and research spaces are available and identical on all nodes.

      The nodes listed below are currently available to run jobs; you may submit jobs to them from any development node.
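      For illustration only, a minimal SLURM batch script might look like the sketch below. The job name and executable are placeholders, and the --constraint value is an assumption: whether the cluster-type names in the table (e.g. intel16) can be used directly as SLURM features should be confirmed in the scheduler documentation before relying on it.

#!/bin/bash
#SBATCH --job-name=example_job       # hypothetical job name
#SBATCH --time=02:00:00              # requested wall time
#SBATCH --nodes=1                    # number of nodes
#SBATCH --ntasks=1                   # number of tasks
#SBATCH --cpus-per-task=20           # cores per task (every node type listed below has at least 20 cores)
#SBATCH --mem=100G                   # memory per node
#SBATCH --constraint=intel16         # optional: restrict the job to one cluster type (assumed feature name)

srun ./my_program                    # placeholder for your executable

      Submit the script from a development node with "sbatch example_job.sb".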


Cluster Type | Node Names | Node Num. | Processors | Cores/Node | Memory/Node | Disk Size/Node | GPUs/Node
intel14 | csm (11 nodes) & css (10 nodes) | 21 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 20 | 240 GB | 416 GB |
intel14 | css-[002-003,020,023,032-035] | 8 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 20 | 115 GB | 416 GB |
intel14 | css nodes | 71 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 20 | 52 GB | 416 GB |
intel14-k20 | csn-[001-039] | 39 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 20 | 115 GB | 416 GB | k20 (2)
intel14-phi | csp-[006,016-020,025-026] | 8 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 20 | 115 GB | 416 GB | Phi card (2)
intel14-xl | qml-003 | 1 | Four Intel Xeon CPU E7-8857 v2 @ 3.00GHz | 48 | 969 GB | 1.8 TB |
intel14-xl | qml-[001-002,004] | 3 | Four Intel Xeon CPU E7-8857 v2 @ 3.00GHz | 48 | 1.45 TB | 897 GB |
intel14-xl | qml-000 | 1 | Four Intel Xeon CPU E7-8857 v2 @ 3.00GHz | 48 | 2.93 TB | 1.1 TB |
intel14-xl | qml-005 | 1 | Eight Intel Xeon CPU E7-8857 v2 @ 3.00GHz | 96 | 5.86 TB | 1.8 TB |
intel16 | lac-[250-253,256-261,302-317] | 26 | Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 28 | 492 GB | 190 GB |
intel16 | lac-[224-225,228-248,278-285,294-301] | 39 | Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 28 | 240 GB | 190 GB |
intel16 | lac-[000-023,032-191,200-223,254,255,276,277,318-341,350-369,372,372-445] | 313 | Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 28 | 115 GB | 190 GB |
intel16-k80 | lac-[024-031,080-087,136-143,192-199,286-293,342-349] | 48 | Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz | 28 | 240 GB | 190 GB | k80 (8)
intel16-xl | vim-[000,001] | 2 | Intel(R) Xeon(R) CPU E7-8867 v3 @ 2.50GHz | 64 | 2.93 TB | 860 GB |
intel16-xl | vim-002 | 1 | Intel(R) Xeon(R) CPU E7-8867 v4 @ 2.40GHz | 144 | 5.86 TB | 3.7 TB |
intel18 | skl-[000-112] | 113 | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | 40 | 83 GB | 413 GB |
intel18 | skl-[113-131] | 19 | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | 40 | 178 GB | 413 GB |
intel18 | skl-[132-139,148-167] | 28 | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | 40 | 367 GB | 413 GB |
intel18 | skl-[140-147] | 8 | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | 40 | 745 GB | 413 GB |
intel18-v100 | nvl-[000-007] | 8 | Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz | 40 | 367 GB | 413 GB | v100 (8)
amd20 | amr-[000-101], amr-[137-209], amr-127, test-amr-[000-001] | 178 | AMD EPYC 7H12 Processor @ 2.595 GHz | 128 | 493 GB | 412 GB |
amd20 | amr-[104-126], amr-[128-136] | 32 | AMD EPYC 7H12 Processor @ 2.595 GHz | 128 | 996 GB | 412 GB |
amd20 | amr-[102-103] | 2 | AMD EPYC 7H12 Processor @ 2.595 GHz | 128 | 2005 GB | 412 GB |
amd20-v100 | nvf-[000-008] | 9 | Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz | 48 | 178 GB | 412 GB | v100s (4)
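
      The GPU nodes above (k20, k80, v100, v100s) are typically requested through SLURM's generic resource (GRES) syntax. The sketch below is illustrative only: the GRES type name "v100" is inferred from the table and may differ from the name actually configured on the system.

#!/bin/bash
#SBATCH --job-name=gpu_example       # hypothetical job name
#SBATCH --time=01:00:00              # requested wall time
#SBATCH --nodes=1                    # number of nodes
#SBATCH --ntasks=1                   # number of tasks
#SBATCH --cpus-per-task=5            # cores per task
#SBATCH --mem=40G                    # memory per node
#SBATCH --gres=gpu:v100:1            # request one GPU; the "v100" type name is an assumption

nvidia-smi                           # report the GPU(s) assigned to this job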


Here are the summary statistics for all clusters:

Cluster | Node Group | Nodes | Cores | Mem. (TB) | GPUs
intel14 | csm & css | 100 | 2,000 | 9.45 |
intel14 | csn (k20) | 39 | 780 | 4.39 | 78
intel14 | csp (phi) | 8 | 160 | 0.9 |
intel14 | qml (-xl) | 6 | 336 | 14.1 |
intel14 | total | 153 | 3,276 | 28.9 | 78
intel16 | lac general | 378 | 10,584 | 56.93 |
intel16 | lac (k80) | 48 | 1,344 | 11.25 | 384
intel16 | vim (-xl) | 3 | 272 | 11.72 |
intel16 | total | 429 | 12,200 | 79.9 | 384
intel18 | nvl | 8 | 320 | 2.87 | 64
intel18 | skl | 168 | 6,720 | 28.4 |
intel18 | total | 176 | 7,040 | 31.3 | 64
amd20 | amr | 212 | 27,136 | 121 |
amd20 | nvf | 9 | 432 | 1.57 | 36
amd20 | total | 221 | 27,568 | 122.6 | 36
All clusters | total | 979 | 50,084 | 262.7 | 562

P.S. Data is updated on . For more recent information, please run the powertools command "node_status" on an HPCC node.
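
For example, from a development node (loading powertools as a module first is an assumption of this sketch; the exact setup may differ):

module load powertools    # assumption: powertools may need to be loaded as a module first
node_status               # prints the current status of the cluster nodes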






