Resources
Max used physical non-swap i386 memory size (MB)   | 0
Max used physical non-swap x86_64 memory size (MB) | 4000
Max size of scratch space used by jobs (MB)        | 20000
Max time of job execution                          | 0
Job wall clock time limit                          | 0
Number of cores (min / pref / max)                 | not specified
Amount of RAM (min / pref / max)                   | not specified
Scratch space (min / pref / max)                   | not specified
Cloud Resources
CPU Core       | not specified
VM RAM (MB)    | 4000
Storage Size   | not specified
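The values above can be summarised as a simple mapping; this is a sketch only, and treating the memory and scratch values as MB and reading 0 or blank fields as "not specified" is an interpretation rather than something stated explicitly on the card:

# Declared LHCb resource requirements (sketch; units assumed to be MB,
# 0 or blank fields read as "not specified").
LHCB_RESOURCES = {
    "max_rss_i386_mb": None,        # not specified (0 on the card)
    "max_rss_x86_64_mb": 4000,      # ~4 GB RSS per payload worker process
    "max_scratch_mb": 20000,        # ~20 GB local scratch per job slot
    "max_cpu_time_min": None,       # not specified
    "max_wall_clock_min": None,     # not specified
    "cloud_vm_ram_mb": 4000,
}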
Other requirements
Further recommendations from LHCb for sites:
The amount of memory in the field "Max used physical non-swap x86_64 memory size" of the Resources section is understood to be the physical memory (RSS) required per single process of an LHCb payload. LHCb payloads usually consist of one "worker process", which consumes the majority of the memory, and several wrapper processes. The wrapper processes together account for 1 GB, which needs to be added to the requirement in "Max used physical non-swap x86_64 memory size" in case the memory of the whole process tree is monitored.
The amount of space in the field "Max size of scratch space used by jobs" shall be interpreted as 50% for downloaded input files and 50% for produced output files.
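As a worked example of the two preceding points, assuming the card's values are in MB:

# Worked example for the memory and scratch interpretation above
# (sketch; assumes the card's values are in MB).
RSS_PER_WORKER_MB = 4000      # "Max used physical non-swap x86_64 memory size"
WRAPPER_OVERHEAD_MB = 1000    # ~1 GB for the wrapper processes around the worker
SCRATCH_MB = 20000            # "Max size of scratch space used by jobs"

# If the whole process tree (worker + wrappers) is monitored, the effective
# per-slot memory limit should include the wrapper overhead.
memory_limit_whole_tree_mb = RSS_PER_WORKER_MB + WRAPPER_OVERHEAD_MB   # 5000 MB

# Scratch space is split 50/50 between downloaded input and produced output.
input_scratch_mb = SCRATCH_MB // 2    # 10000 MB for input files
output_scratch_mb = SCRATCH_MB // 2   # 10000 MB for output files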
CPUs should support the x86_64_v2 instruction set (or later). Sites are requested to provide support for Apptainer containers via user namespaces. Apptainer does not need to be installed on the host; the availability of user namespaces can be checked by ensuring that /proc/sys/user/max_user_namespaces contains a large number.
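A minimal worker-node check along these lines could look as follows; the x86-64-v2 flag set used here follows the published psABI microarchitecture levels, and the threshold for "a large number" of user namespaces is an arbitrary choice:

# Sketch of the two host checks described above (Linux, /proc mounted).
X86_64_V2_FLAGS = {"cx16", "lahf_lm", "popcnt", "pni", "sse4_1", "sse4_2", "ssse3"}

def user_namespaces_available(threshold=10000):
    """True if the kernel allows a reasonably large number of user namespaces."""
    with open("/proc/sys/user/max_user_namespaces") as f:
        return int(f.read().strip()) >= threshold

def cpu_supports_x86_64_v2():
    """True if /proc/cpuinfo advertises the x86-64-v2 feature flags."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return X86_64_V2_FLAGS.issubset(flags)
    return False

print("user namespaces:", user_namespaces_available())
print("x86_64_v2 CPU:", cpu_supports_x86_64_v2())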
The shared software area shall be provided via CVMFS. LHCb uses the following mount points on the worker nodes (a verification sketch follows the list):
- /cvmfs/lhcb.cern.ch/
- /cvmfs/lhcb-condb.cern.ch/
- /cvmfs/lhcbdev.cern.ch/
- /cvmfs/unpacked.cern.ch/
- /cvmfs/cernvm-prod.cern.ch/
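A simple way to verify the mounts on a worker node (sketch; listing the top-level directory also triggers automount on autofs-managed CVMFS clients):

# Check that the required CVMFS repositories are mounted and readable.
import os

CVMFS_REPOS = [
    "/cvmfs/lhcb.cern.ch",
    "/cvmfs/lhcb-condb.cern.ch",
    "/cvmfs/lhcbdev.cern.ch",
    "/cvmfs/unpacked.cern.ch",
    "/cvmfs/cernvm-prod.cern.ch",
]

for repo in CVMFS_REPOS:
    try:
        status = "OK" if os.listdir(repo) else "EMPTY"   # listdir forces the automount
    except OSError as exc:
        status = "MISSING ({})".format(exc.strerror)
    print(repo, status)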
A reasonable number of slots per disk server should be provisioned, proportional to the maximum number of concurrent jobs at the site.
Non-Tier1 sites providing CVMFS, direct HTCondorCE, ARC, or CREAM submission, and the requested amount of local scratch space will be considered as candidates for additional workloads (e.g. data reprocessing campaigns).
Sites with disk storage must provide the following (a basic reachability sketch is given after the list):
- an xroot endpoint (single DNS entry), at least for reading
- an HTTPS endpoint (single DNS entry), both read and write, supporting Third Party Copy
- a way to do the accounting (preferably following the WLCG TF standard: https://twiki.cern.ch/twiki/bin/view/LCG/StorageSpaceAccounting)
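The hostnames below are hypothetical placeholders and the ports are only the conventional defaults (1094 for xrootd, 443 for HTTPS); a TCP connection test like this says nothing about authentication, Third Party Copy, or accounting, but gives a first sanity check:

# Rough reachability sketch for the two storage endpoints requested above.
import socket

ENDPOINTS = [
    ("xrootd.example-site.org", 1094),   # xroot endpoint, at least read access
    ("webdav.example-site.org", 443),    # HTTPS endpoint, read/write + TPC
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(host, port, "reachable")
    except OSError as exc:
        print(host, port, "NOT reachable:", exc)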
Sites with tape storage should be accessible from the other Tier1 and Tier2 sites. They should provide one of the supported WLCG tape systems (dCache or CTA). Tape classes to optimize data distribution are to be discussed on a per-site basis.