h1. New computing cluster in Koeniginstrasse
h2. Introduction
Since January 2022 we have a new computing cluster, which is installed in the server room of the physics department at Koeniginstrasse.
h2. Hardware
* there are 8 compute nodes available in total;
* the compute nodes are named "usm-cl-bt01n[1-4]" and "usm-cl-bt02n[1-4]";
* each node has 128 cores;
* each node has 500 GB available;
* the storage for our group (/project/ls-mohr) has 686 TB;
h2. Login
* public login server: login.physik.uni-muenchen.de;
* Jupyterhub: https://workshop.physik.uni-muenchen.de;
* both the login server and the Jupyterhub require two-factor authentication: the first factor is your physics account password, the second a time-based one-time password generated by a smartphone app such as Google Authenticator (or any other TOTP app). The app needs to be registered at https://otp.physik.uni-muenchen.de, where it is called a soft token.
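For reference, a terminal login to the public login server looks like this (a sketch; "<user_name>" stands for your physics account name):

<pre>
$ ssh <user_name>@login.physik.uni-muenchen.de
</pre>

You are then asked for your account password first and for the one-time password from the app as the second factor.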
h2. Graphical Remote Login
A graphical remote login from outside the LMU network requires a VPN connection. Since June 2022 the only VPN connection offered is "eduVPN":https://doku.lrz.de/display/PUBLIC/VPN+-+eduVPN+-+Installation+und+Konfiguration. After establishing the VPN connection, the login is done with X2GO as explained "here":https://www.en.it.physik.uni-muenchen.de/dienste/netzwerk/rechnerzugriff/zugriff3/remote_login/index.html. I was pointed to the following login servers:
* cip-sv-login01.cip.physik.uni-muenchen.de
* cip-sv-login02.cip.physik.uni-muenchen.de
but I assume the login servers for Garching work as well. X2GO opens a KDE desktop, and from there you can of course connect to our cluster.
h2. Processing
* as on our local cluster, "slurm" is used as the job scheduling system; access to the computing nodes and running jobs requires starting a corresponding slurm job;
* the partition of our cluster is "usm-cl";
* from the login node you can start an interactive job via "intjob --partition=usm-cl" (additional slurm arguments are accepted as well);
* I created a "python script":https://cosmofs3.kosmo.physik.uni-muenchen.de/attachments/download/285/scontrol.py which provides information on our partition (which jobs are running on which node, the owner of the job and so on);
* I have also put together a rather silly "slurm script":https://cosmofs3.kosmo.physik.uni-muenchen.de/attachments/download/283/test.slurm which can be used as a starting point (a minimal sketch is also given after this list);
* note that it is possible to directly "ssh" to all nodes on which one of your batch jobs is running. This can help to supervise the processing;
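As an illustration, a minimal batch script in the spirit of the test script above might look like the sketch below (job name, resource requests and the program are placeholders and not taken from the actual test.slurm):

<pre>
#!/bin/bash
#SBATCH --job-name=my_job          # placeholder job name
#SBATCH --partition=usm-cl         # our partition (see above)
#SBATCH --nodes=1                  # run on one node
#SBATCH --ntasks=1                 # one task
#SBATCH --cpus-per-task=4          # adjust to your needs (up to 128 cores per node)
#SBATCH --time=01:00:00            # requested wall-clock time
#SBATCH --output=slurm-%j.out      # log file, %j is the job id

# replace with your own program
srun ./my_program
</pre>

Such a script would be submitted with "sbatch <script_name>"; "squeue --partition=usm-cl" lists the jobs on our partition.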
h2. Disk space
* users can create their own disk space under "/project/ls-mohr/users/" such as "/project/ls-mohr/users/martin.kuemmel";
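For example, creating your own directory there could be done as follows (a sketch, assuming the firstname.lastname naming shown above):

<pre>
$ mkdir -p /project/ls-mohr/users/<firstname.lastname>
</pre>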
h2. Installed software
We use a package manager called spack to download and install software that is not directly available from the Linux distribution. To see what is already installed, do the following on a computing node:
* "module load spack"
* "module avail"
Adding more software is not a problem.
h2. Euclid processing on the cluster
While the OS, libraries and setup are different from EDEN-?.?, it is possible to load and run an EDEN-3.0 environment using a container solution. The cluster offers "singularity":https://sylabs.io/guides/3.0/user-guide/quick_start.html for this purpose. While singularity is not officially supported in Euclid, it is being used there in a limited role, and singularity can run docker images, which are the supported container format in Euclid. To work in an EDEN-3.0 environment on the new cluster you need to get the docker image as follows:
* load singularity via:
  <pre>
  $ module load spack
  $ module load singularity</pre> Note that the singularity version which is directly available on the computing nodes at "/usr/bin/singularity" does *not* work. The correct version loaded via the modules is at "/software/opt/focal/x86_64/singularity/v3.8.1/bin/singularity".
* it is *recommended* to move the singularity cache to somewhere under "/scratch-local", e.g. via:<pre>$ mkdir -p /scratch-local/$USER/singularity
$ export SINGULARITY_CACHEDIR=/scratch-local/$USER/singularity</pre> At the default cache location "$HOME/.cache/singularity" there are problems deleting the entire cache when leaving singularity.
* pull the Euclid docker image via: <pre>singularity pull --docker-login docker://gitlab.euclid-sgs.uk:4567/st-tools/ct_xodeen_builder/dockeen</pre> After entering your gitlab credentials, the docker image is stored in the file "dockeen_latest.sif".
The docker image can be run interactively:
 <pre>$ singularity run --bind /cvmfs/euclid.in2p3.fr:/cvmfs/euclid.in2p3.fr --bind /cvmfs/euclid-dev.in2p3.fr:/cvmfs/euclid-dev.in2p3.fr <path_to>dockeen_latest.sif</pre>
It is also possible to directly issue a command in EDEN-3.0:
 <pre>$ singularity exec --bind /cvmfs/euclid.in2p3.fr:/cvmfs/euclid.in2p3.fr --bind /cvmfs/euclid-dev.in2p3.fr:/cvmfs/euclid-dev.in2p3.fr <path_to>dockeen_latest.sif  <command_name></pre>
In both cases the relevant EDEN environment must first be loaded with:
<pre>
$ source /cvmfs/euclid-dev.in2p3.fr/CentOS7/EDEN-3.0/bin/activate
</pre>
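For a non-interactive call, e.g. from within a slurm batch script, one way to combine the activation and the command is to wrap both into a single shell invocation (a sketch, not an officially documented pattern; paths and placeholders as above):

<pre>
$ singularity exec --bind /cvmfs/euclid.in2p3.fr:/cvmfs/euclid.in2p3.fr --bind /cvmfs/euclid-dev.in2p3.fr:/cvmfs/euclid-dev.in2p3.fr <path_to>dockeen_latest.sif \
    bash -c "source /cvmfs/euclid-dev.in2p3.fr/CentOS7/EDEN-3.0/bin/activate && <command_name>"
</pre>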
Information on the usage of singularity in Euclid is available at the "Euclid Redmine":https://euclid.roe.ac.uk/projects/codeen-users/wiki/EDEN_SINGULARITY.
h2. Support
Support is provided by the IT support group (Rechnerbetriebsgruppe) of the LMU faculty of physics via the helpdesk email helpdesk@physik.uni-muenchen.de. Please keep Joe Mohr and me (Martin Kuemmel: mkuemmel@usm.lmu.de) in the loop so that we can maintain an overview of the cluster performance.