
CC IN2P3 Resource management policy (czar tasks)



Storage

AFS

The CC IN2P3 provides different types of AFS spaces. Each user has a $HOME_DIR. In addition, each experiment is granted a $THRONG_DIR and a $GROUP_DIR. Only the $THRONG_DIR and the $HOME_DIR are backed up by the CC teams.
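As an indication, the quota and current usage of each of these spaces can be checked with the standard AFS fs command (assuming the $HOME_DIR, $THRONG_DIR and $GROUP_DIR variables are defined in your CC environment):

    # Check quota and usage of the personal and group AFS spaces
    fs listquota $HOME_DIR
    fs listquota $THRONG_DIR
    fs listquota $GROUP_DIR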

General documentation (in French and English) provided by CC about AFS $GROUP_DIR management can be found here.

The most important AFS space for the Fermi collaboration is the $GROUP_DIR, which has a size of 170 GB and is divided into two identical partitions.

The organization of this space has been decided by the group as follows:

  • Each user has a private space of 200 MB. In some particular cases, a larger space has been allocated.
  • Larger spaces have been reserved for the so-called activities: Pipeline and Catalog (20000 MB each).
  • The "ground" directory (40000 MB) reproduces the structure of the "ground" directory at SLAC, and ACL permissions have been applied so that every member of the group is allowed to write in some parts of it (the releases subdirectory, for example); see the example after this list.

In order to avoid I/O saturation of the AFS servers, the files needed by Pipeline jobs have been spread over the two partitions: the scripts needed to configure the jobs and some ancillary data are located in partition 2 (/afs/in2p3.fr/group/glast/Pipeline/PipelineConfig), whereas external libraries and releases are in partition 1 ($GROUP_DIR/ground).
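To check which volume, and hence which partition, a given directory actually lives on, the standard fs commands can be used, for example:

    # Show the volume holding the Pipeline configuration area (partition 2)
    fs examine /afs/in2p3.fr/group/glast/Pipeline/PipelineConfig

    # Show the file server(s) hosting the ground tree (partition 1)
    fs whereis $GROUP_DIR/ground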

SPS

The SPS storage space is managed by a GPFS system. The allocation given to the Fermi collaboration is 2 terabytes.

In order to manage this space correctly, it has been decided to separate the space used by individual users from the space needed for global activities:

  • A directory /sps/glast/users has been created, where each user has a 60 GB space.
  • 250 GB each have been allocated to "data" and "catalog", and 500 GB to "Pipeline2".
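As a quick way to monitor this space, usage can be checked with standard tools; the mmlsquota line below is only a sketch, since the GPFS device and fileset names are assumptions and the command is only available where the GPFS client tools are installed:

    # Overall usage of the glast SPS space
    df -h /sps/glast

    # Usage of a personal area under /sps/glast/users
    du -sh /sps/glast/users/$USER

    # Per-fileset quota report (GPFS client tools required; device and fileset names assumed)
    mmlsquota -j users sps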

The details of the state of the different SPS filesets are given here.

Examples of AFS and GPFS commands

Practical examples can be found in AFSAdminScript and GPFSAdminScript. Only the czar login is allowed to execute these commands.
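As an indication of the kind of commands these scripts rely on, the sketch below shows how a 200 MB user area could be created in AFS; the server, partition and volume names are placeholders, and the actual scripts may proceed differently:

    # Create a 200 MB (204800 KB) volume for a new user
    # (server, partition and volume names are placeholders)
    vos create afsserver.in2p3.fr /vicepa q.glast.newuser -maxquota 204800

    # Mount it under the group space and adjust the quota if needed
    fs mkmount $GROUP_DIR/newuser q.glast.newuser
    fs setquota $GROUP_DIR/newuser -max 204800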

HPSS tape storage

There is no quota limit. The only restriction on the use of this storage system is not to store files that are too small, as data are retrieved by a mechanical tape robot, and too many small files can degrade its performance. The definition of "too small" evolves with time, so it is best to check the documentation provided by CC, here.
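A simple way to respect this rule is to bundle many small files into a single archive before sending it to HPSS, as in the sketch below; the run name, the rfcp command and the /hpss/in2p3.fr/group/glast path are given only as an indication and should be checked against the current CC documentation:

    # Bundle many small files into a single archive (run name is a hypothetical example)
    tar czf run_00123.tar.gz run_00123/

    # Copy the archive to the group's HPSS space (tool and path given as an indication only)
    rfcp run_00123.tar.gz /hpss/in2p3.fr/group/glast/run_00123.tar.gz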


Computing

Since 2007, the Fermi LAT collaboration has been using the Anastasie batch farm, which consists of shared worker nodes used by several experiments (see details here). Two management systems coexist for the moment, GridEngine and BQS, but only the former will remain at the end of 2011.
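For reference, a minimal GridEngine submission from an interactive CC machine could look like the sketch below; the project name P_glast and the sps resource flag are assumptions, and the exact options should be taken from the CC batch documentation:

    # Submit a job script to the farm under the glast project
    # (project name and resource flag are assumptions; see the CC documentation)
    qsub -P P_glast -l sps=1 myjob.sh

    # Check the status of your jobs
    qstat -u $USER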

CPU time

4 million HS06 have been allocated to Fermi for 2011 (1 HS06 = 250 SI2). This amount is mainly dedicated to the Pipeline activity, for which an MoU between SLAC and CC guarantees the long-term availability of 600 cores (1200 since January 2011).

Unitary resources

CC teams use "unitary resources" to anticipate production needs, and to avoid saturation of some systems.

In the case of the BQS batch system and for glast production, two unitary resources were created to avoid saturating the SPS system: u_sps_glast and u_sps_glast_prod. They were needed to separate user production from Pipeline production. They define the maximum number of jobs that can be executing simultaneously and the number of jobs that can be queued at each scheduling step. How to find the current parameters of a unitary resource is described here.
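As an indication only, a BQS job would declare such a resource at submission time along the lines of the sketch below; the exact option syntax should be checked against the BQS documentation:

    # Declare the unitary resource at submission time so that BQS can limit
    # the number of SPS-using jobs running at once (syntax given as an indication only)
    qsub -l platform=LINUX,u_sps_glast myjob.sh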