Experiment-day guidelines

From cctbx_xfel

Before getting to the beamline

  • If the space groups and unit cells of the samples are known, they can be used to aid indexing. Put them in your indexing phil file.
  • Make sure that an up-to-date metrology phil file is available, as well as an accurate detz (detector distance) value.
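Known cells and symmetry can be recorded in a phil fragment along these lines. The parameter names and values below are illustrative only; use the exact names defined by your indexing program's phil scope.

```
# Hypothetical phil fragment -- substitute the parameter names
# from your indexing program's phil scope; cell values are examples.
target_cell = 79.1 79.1 38.2 90 90 90
target_sg = P43212
```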

As soon as you get to the beamline

  • Collect and average a dark run of at least 1000 images. Repeat this whenever anything happens that may affect the detector, e.g. ice crystals forming that may damage pixels.
  • Set up a sample spreadsheet; the minimum columns are run number and sample description. Other useful fields include number of images, comments, hit rate, and number of indexed images. Some people also like additional fields such as energy/wavelength, attenuation, and sample injection or mounting details (e.g. consumption rate and volume). This is largely a question of personal preference, but it is important to consider what is duplicated from the recorded metadata; sometimes fewer fields are better.
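A minimal run log like the one described above can be kept as a CSV; a sketch, with illustrative column names and made-up numbers, is below. Note that rates can always be derived from the raw counts, which is why storing them as extra columns duplicates information.

```python
import csv
import io

# Sketch of a minimal run log; column names and numbers are
# illustrative, not from a real experiment.
runs = [
    {"run": 12, "sample": "lysozyme batch A", "n_images": 36000,
     "n_hits": 5400, "n_indexed": 3100, "comments": "fresh sample"},
]

# Write the log as CSV (here to an in-memory buffer for illustration).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(runs[0]))
writer.writeheader()
writer.writerows(runs)

# Derived metrics, recomputable from the raw counts at any time.
hit_rate = runs[0]["n_hits"] / runs[0]["n_images"]
index_rate = runs[0]["n_indexed"] / runs[0]["n_hits"]
```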

Quick file system overview

When data are acquired, they are first passed to the DAQ (data acquisition) systems. The data are multiplexed into five or six streams to allow high-speed readout; this can be seen in the _sXX_ part of XTC filenames (s for stream). The streams are written to the FFB filesystem at /reg/d/ffb, a fast flash-based system intended for use only while online. Files are written as XXX.inprogress during data collection; the extension is changed to XXX.xtc once acquisition is complete. Do not attempt to process a file that is still in progress, since pyana will crash when it hits the end of the file.
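The stream number and completion state can be read straight from the filename. The sketch below assumes a filename convention of the form e123-r0045-s01-c00.xtc, consistent with the _sXX_ pattern described above; adjust the pattern for your experiment if the convention differs.

```python
import re

# Assumed XTC filename convention: e<expt>-r<run>-s<stream>-c<chunk>.<ext>
xtc_pattern = re.compile(r"e(\d+)-r(\d+)-s(\d+)-c(\d+)\.(xtc|inprogress)$")

def stream_info(filename):
    """Extract run/stream info from an XTC filename, or None if no match."""
    m = xtc_pattern.search(filename)
    if m is None:
        return None
    expt, run, stream, chunk, ext = m.groups()
    return {
        "run": int(run),
        "stream": int(stream),     # the _sXX_ multiplexing stream
        "complete": ext == "xtc",  # skip files still being written
    }

info = stream_info("e123-r0045-s01-c00.xtc")
```

Filtering on the "complete" flag is one simple way to avoid handing an in-progress file to pyana.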

Data are then copied to the PSDM filesystem (/reg/d/psdm) for offline processing, but this copy is not fast. While online, therefore, read data from the FFB filesystem and write results to PSDM. When offline, read and write from PSDM. Do not read from FFB when not at the beamline.

Programs worth having up online

  • mod_viewer to view data streams as soon as they are collected; this shows the raw data. An example config file, test.cfg, is in the tutorial setup directory (see Preparatory Steps). Two parameters can be controlled here: n_update, which defines the spacing (in image number) between displayed windows, and n_collate, which defines the size of the averaging window. Typical values are 120 for both, which displays 1 s averages of the data, or n_update=120 and n_collate=1, which displays the first image of every second (at 120 Hz). This allows immediate visual inspection of data quality.
  • Light average/max to see an aggregate of the whole run. See Preparatory steps. The max image is particularly useful as a virtual powder pattern to test for diffraction.
  • Either process the whole dataset, or just run a hitfinder with real-time logging to assess data quality semi-quantitatively. See Progress monitoring. The results will be highly sensitive to the choices made in the hitfinding and indexing cfg and phil files. If doing a quick hitfinder, it may be worth also submitting jobs to the queue for initial hitfinding and indexing. Note that each sample will have its own optimal hitfinding and indexing parameters.
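The core of a threshold-based hitfinder like the one described above can be sketched in a few lines. This is a minimal illustration, not the actual cctbx.xfel hitfinder; the ADU threshold and minimum pixel count are illustrative and, as noted above, need tuning per sample.

```python
import numpy as np

def is_hit(image, adu_threshold=400, min_bright_pixels=20):
    """Call an image a hit if enough pixels exceed an ADU threshold.

    Both parameters are illustrative starting points, not fixed rules.
    """
    return int((image > adu_threshold).sum()) >= min_bright_pixels

# Synthetic test images: flat background noise, and the same
# background with 50 fake bright "peak" pixels added.
rng = np.random.default_rng(0)
background = rng.normal(30, 5, size=(512, 512))
hit_image = background.copy()
hit_image.flat[:50] = 1000.0
```

Logging the fraction of images for which is_hit() returns True per run gives the semi-quantitative hit rate mentioned above.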

Because indexing requires some iteration of the parameters, a sensible approach to generating online metrics is to focus on mod_viewer, average/max images, and reasonably permissive hitfinding. A good first guess at a suitable hitfinding threshold can be obtained by inspecting the high-resolution edges of the light average/max images. Integration and merging of successful data collections can then be performed during downtime, or once the online metrics are running smoothly.
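The edge-inspection step above can be approximated numerically: take a high percentile of the pixel values in strips along the detector edges of the max image as a first-guess threshold. The strip width and percentile below are illustrative assumptions, not prescribed values.

```python
import numpy as np

def edge_threshold(max_image, strip=64, percentile=99.9):
    """First-guess hitfinding threshold from the high-resolution edges.

    Takes a high percentile of pixel values in strips along all four
    detector edges of a max-projection image. Strip width and
    percentile are illustrative starting points.
    """
    edges = np.concatenate([
        max_image[:strip].ravel(),      # top strip
        max_image[-strip:].ravel(),     # bottom strip
        max_image[:, :strip].ravel(),   # left strip
        max_image[:, -strip:].ravel(),  # right strip
    ])
    return float(np.percentile(edges, percentile))
```

A threshold chosen this way errs on the permissive side, which suits the online-monitoring goal described above; it can be tightened once real indexing statistics are available.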