Virtual CERN Analysis Facility for administrators

Who can control it

The Virtual CERN Analysis Facility runs on top of CERN OpenStack.

ALICE has a special OpenStack project for VCAF virtual machines called ALICE CERN Analysis Facility. All persons belonging to the CERN egroup alice-vaf can launch, terminate and control virtual machines on this OpenStack project.

Only persons belonging to the egroup alice-agile-admin can add and remove members from the alice-vaf egroup.

Architecture overview

The Virtual CAF is a HTCondor batch cluster. Even though any batch job can be run on top of it, the Virtual CAF is intended to run PROOF via PROOF on Demand (PoD).

Thanks to a set of environment scripts, the batch system is invisible to the end user.
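
For instance, what those environment scripts do under the hood is roughly equivalent to the following PoD session (a sketch; exact options may vary with the PoD version):

pod-server start            # start the user's private PoD server
pod-submit -r condor -n 20  # request 20 PROOF workers through HTCondor
pod-info -c                 # print the PROOF connection string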

Virtual machines and contextualization

The Virtual CAF is a cluster made of several virtual machines sharing a single base image. The base image is CernVM and the contextualization uses cloud-init. This specific Virtual CAF configuration is designed to work from within the CERN network, with the virtual machines having CERN-registered IP addresses.

A vanilla implementation of the VAF featuring automatic scalability, which works on any cloud even outside CERN, can be configured graphically using the CernVM Online portal by following the instructions here.

The single base image is contextualized via cloud-config manifests interpreted at boot time by cloud-init.
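
As a purely hypothetical illustration (the actual VAF manifests are private and more elaborate), a cloud-config file starts with the #cloud-config marker and might look like this:

#cloud-config
write_files:
  - path: /etc/condor/config.d/10-vaf.conf
    content: |
      CONDOR_HOST = <head-node-fqdn>
runcmd:
  - [ service, condor, start ]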

Node classes

There are three classes of nodes: head, submit and worker. As noted, the base image is the same for all of them, but the contextualization differs.

How to launch a new cluster

As a system administrator, you will have to perform the following steps:

  1. Launch the single head node
  2. Launch the number of submit nodes you wish
  3. Configure the head node to control worker deployment automatically

Steps 1 and 2 are performed from the CERN OpenStack web interface, under the ALICE CERN Analysis Facility project.

Registering the key

When launching a new node, you might notice that the AliceVCAF key is not available in your list of keys. This is because OpenStack SSH keys are per user and not per project.

In practice, this means you need to register the same key, with the same name, under your own user.

The public key can be found here:


Go to the CERN OpenStack web interface, open Access & Security, then click Import Key Pair.

Use AliceVCAF as the key name (do not use an arbitrary name!) and paste the given public key in the textbox.
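
Alternatively, the same import can be done from the command line (a sketch, assuming the nova client is installed, your OpenStack credentials are sourced, and the public key has been saved to AliceVCAF.pub):

nova keypair-add --pub-key AliceVCAF.pub AliceVCAF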

The corresponding private key, readable from AFS only by the admins, is:


Starting the head node

Click on the Launch Instance button, then:

On the Access & Security tab you must pick the AliceVCAF keypair.

On the Post-Creation tab, provide the contextualization file cern-vaf-head.txt from the private area. This file is not public as it contains sensitive information.

That is it: click the Launch button and wait.
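
For reference, the equivalent operation from the command line would look roughly like this (image name, flavor and instance name are placeholders, not the actual values):

nova boot --image <CernVM-image> --flavor <flavor> \
  --key-name AliceVCAF --user-data cern-vaf-head.txt \
  <head-node-name>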

Starting multiple submit nodes

Click on the Launch Instance button, then:

On the Access & Security tab you must pick the AliceVCAF keypair.

On the Post-Creation tab, provide the contextualization file cern-vaf-submit.txt from the private area. This file is not public as it contains sensitive information.

That is it: click the Launch button and wait.
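
Again for reference, several submit nodes can be launched in one go from the command line (image, flavor, name and count are placeholders):

nova boot --image <CernVM-image> --flavor <flavor> \
  --key-name AliceVCAF --user-data cern-vaf-submit.txt \
  --min-count 3 --max-count 3 <submit-node-name>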

Managing worker nodes

Worker nodes are launched by the elastiq daemon, which is turned off and unconfigured by default.

The configuration file is available from the private area and it is called elastiq.conf.

In particular, this file contains a one-line base64-encoded string with the contextualization to use for the worker nodes.

In principle, elastiq can be configured to automatically turn nodes on and off based on the effective use of the cluster. However, since VM deployment appears to be slow on CERN OpenStack, we configure it with identical minimum and maximum quotas, so that the same number of VMs is kept alive regardless of the effective use.
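
For illustration, the relevant quota section of elastiq.conf would look like this (the actual values are in the private configuration file; 20 is just a placeholder):

[quota]
min_vms = 20
max_vms = 20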

Once configured, elastiq should be (re)started:

service elastiq restart

elastiq is convenient because it automatically detects VMs in an error state and tries to recover them.

Once elastiq is up and running, it is possible, for instance, to periodically refresh or update a stuck cluster by simply deleting from the OpenStack interface all the VMs whose names start with server-: elastiq will detect the missing VMs and respawn them automatically.
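
A sketch of how the same deletion can be done from the command line, assuming the nova client is configured for the right project (the awk expression prints the ID column of nova list for rows whose name matches server-):

nova list | awk '/ server-/ {print $2}' | xargs -r -n1 nova delete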

Logging in to the virtual machines

Virtual machines are configured to allow unprivileged logins from all CERN users in the alice-member egroup, using their CERN password with Kerberos authentication. An AFS token is also automatically created at login.

In addition, non-ALICE users can be granted VAF access if they are members of alice-vaf-external-users.
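
For example, a user with a valid Kerberos ticket can log in without retyping their password (user name and node host name below are placeholders):

kinit jdoe@CERN.CH
ssh jdoe@<vaf-node>.cern.ch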

For administering the machine you need to have the private key corresponding to the AliceVCAF key you have used when starting the virtual machines from OpenStack.

You cannot log in as root; instead, you will log in as user cloud-user:

ssh -i ~/.ssh/AliceVCAF.pem cloud-user@<vm-hostname>

This user has passwordless sudo privileges. To become root:

sudo -sE

Configuration on AFS

Most of the configuration is read directly from AFS. Everything is stored under:


The AliceVaf.par file

This file is a special PARfile required to load the ALICE environment. It is stored in the aforementioned directory, and users load it directly from there.

The file is generated by following the procedure described here.

Replacing this package means updating it for all users at once.

User quotas

The condor directory contains a file with all the necessary information to enforce user quotas on the cluster.

There are three variables to configure (do not touch the rest!):

Change this file directly on AFS. After changing it, Condor needs to be reloaded on the head node and on all submit nodes! Note that a reload is sufficient; there is no need to perform a costly restart:

service condor reload
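
Since the reload has to happen on several nodes, the parallel SSH wrapper described below (see Parallel SSH on all VAF nodes) can do it in one go, for instance:

/afs/ 'sudo service condor reload'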

ALICE environment

The etc directory contains scripts read by vaf-enter when setting the user environment. Changing the files on AFS affects all VAF users immediately; no service restart is needed (users will have to re-enter the VAF environment, though).
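
As a purely hypothetical example (the variable below is made up and not part of the actual configuration), a snippet in one of those scripts could look like this, and every new vaf-enter session would pick it up immediately:

# hypothetical snippet in one of the etc scripts
export VafExtraOption='some value'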

See here for more information.

Private configuration

The private directory is not readable by users. It contains:

Make the private directory “private”!

Since we are on AFS, chmod 0700 will not be sufficient! From a privileged user (alibrary is the service account owning the files):

cd /afs/
fs setacl -dir $PWD -acl system:anyuser none
fs setacl -dir $PWD -acl dberzano write
fs setacl -dir $PWD -acl litmaath write

AFS permissions are explained here: in our case we are giving users dberzano (Dario Berzano) and litmaath (Maarten Litmaath) write permissions, i.e. they can change files (but not the ACL).

To list current permissions (la means listacl):

$> fs la /afs/
Access list for /afs/ is
Normal rights:
  z2:admin rlidwka
  system:administrators rlidwka
  litmaath rlidwk
  dberzano rlidwk

It is critical that the information inside this directory remains private!


Monitoring

The monitoring daemon and scripts can be found on AFS:


To enable monitoring on all nodes, put these lines in the crontab for user cloud-user (or any other unprivileged one):

@reboot  /afs/
0 * * * *  /afs/

On the master node add the -master parameter:

@reboot  /afs/ -master
0 * * * *  /afs/ -master

Monitoring information

VAF monitoring occurs via ApMon and it is visible on MonALISA. Every host sends default ApMon information (such as bogomips and network traffic), plus the following variable:

Default monitoring information is sent to the following cluster:


The master node sends the following information:

Global information is sent to the following cluster:


All information is sent to

Parallel SSH on all VAF nodes

A convenient version of mpssh, slightly adapted for the VAF, is available. To use it, run:

/afs/ [-f 'filter'] 'remote command line'

The correct key, readable only by VAF admins from AFS, will be used, and all commands are executed as cloud-user. To execute commands as root, simply prepend sudo to the remote command line.

The list of nodes is grabbed automatically via the EC2 interface for the correct tenant. The optional -f <filter> option allows specifying an additional grep invocation to filter (out) the desired hosts: this filter is applied to the euca-describe-instances output.

For example, to execute uptime on all head nodes (whose names contain alivaf):

/afs/ -f alivaf uptime

To execute the same command on all nodes not matching alivaf:

/afs/ -f '-v alivaf' uptime

Web hosting of example files

For sites without AFS access, there is a CERN web service from which some files can be downloaded.

This web space belongs to the service account alibrary; once logged in as that account, it can be administered from here.

The user has the following AFS directory exported:


which maps to this URL (forbidden by default).

The vaf subdirectory contains a symbolic link to AliceVaf.par and the examples:

Changes in the PARfile and in the examples are immediately visible over HTTP.

Common problems

Disk full

If something goes wrong, stale data might fill up the disks. When this happens, the quickest way to solve the problem is to identify the culprits, either by using the monitoring (you can sort by the Disk free column) to check the available space, or by running:

/afs/ -f '-v alivaf' 'df -h /'

Nodes with low disk space can be fixed on the fly by simply deleting them from the OpenStack interface: elastiq will bring new ones up automatically. This is by far the most effective solution.

For a more fine-grained approach, bear in mind that the directories that tend to fill up are /tmp and the Condor execute directories. Just stop Condor, clean them up, and restart it:

/afs/ -f '-v alivaf' \
  'sudo service condor stop; sudo rm -rf /tmp/*; sudo rm -rf /var/lib/condor/execute/*; sudo service condor start'