By default, particles are read every iteration from the location specified in the input STAR file.

In general, you will most likely want to run a single MPI process on each GPU.

Note that 3D auto-refinement always needs to be run with at least 3 MPI processes (a master, and one slave for each half-set).
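
As a rough illustration of those two rules, the small sketch below (not part of the program; the function name is made up) derives an MPI process count from the number of available GPUs:

    # Sketch only: one slave per GPU plus one master, but never fewer than
    # three MPI processes for 3D auto-refinement (a master plus one slave
    # per half-set). With a single GPU, the two slaves would have to share it.
    def mpi_processes_for_refine(n_gpus: int) -> int:
        return max(n_gpus + 1, 3)

    for n_gpus in (1, 2, 4):
        print(f"{n_gpus} GPU(s): mpirun -n {mpi_processes_for_refine(n_gpus)}")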

We often use the option of copying particles to local scratch storage if we don't have enough RAM to read in all the particles, but we do have large enough, fast SSD scratch disk(s).

Each node of our cluster has at least 64GB RAM, and an Intel(R) Xeon(R) CPU E5-2667 0 @ 2.90GHz.

The 12 cores of each node are hyperthreaded, so each physical core appears as two cores to the operating system.
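
If you want to check the same thing on your own machine, a quick comparison of physical versus logical core counts could look like the following sketch, which assumes the third-party psutil package (not something the program itself needs):

    import os

    import psutil  # third-party package, used here only to count physical cores

    logical = os.cpu_count()                    # cores as seen by the OS
    physical = psutil.cpu_count(logical=False)  # actual physical cores
    print(f"{physical} physical cores appear as {logical} logical cores")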

There is also an option to read all particles into RAM once, at the very beginning of the job, instead of re-reading them from disk at every iteration.

We often use this option if the machine has enough RAM (more than N*boxsize*boxsize*4 bytes) to store all N particles.
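
That rule of thumb is easy to evaluate before starting a job; the particle count and box size below are purely illustrative:

    # RAM needed to hold N particles of a given box size in memory, using the
    # N * boxsize * boxsize * 4 bytes rule of thumb mentioned above.
    def particle_ram_gb(n_particles: int, boxsize: int) -> float:
        return n_particles * boxsize * boxsize * 4 / 1024**3

    # e.g. 100,000 particles in 256-pixel boxes need roughly 24 GB of RAM
    print(f"{particle_ram_gb(100_000, 256):.1f} GB")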

The implemented GPU support is compatible with CUDA compute capability 3.5 or higher.

See Wikipedia's CUDA page for a complete list of such GPUs.
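
If you prefer to query the cards directly rather than look them up, a minimal sketch assuming the third-party pycuda package (an independent check, not something the program itself uses) could be:

    import pycuda.driver as cuda  # third-party package, used only for this check

    cuda.init()
    for i in range(cuda.Device.count()):
        device = cuda.Device(i)
        major, minor = device.compute_capability()
        supported = (major, minor) >= (3, 5)
        print(f"GPU {i}: {device.name()} has compute capability {major}.{minor} "
              f"({'usable' if supported else 'below the 3.5 requirement'})")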

This default behaviour will likely be the preferred one if you do not need to share the computer with anyone else, and you only want to run a single job at a time.
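
If the machine is shared, one common way to keep a job on a subset of the cards is the standard CUDA_VISIBLE_DEVICES environment variable; the GPU indices and the launched command in this sketch are placeholders only:

    import os
    import subprocess

    # Expose only GPUs 0 and 1 to the child process; the indices are illustrative.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES="0,1")
    # The echo below stands in for whatever GPU job you actually want to launch.
    subprocess.run(["echo", "launching with only GPUs 0 and 1 visible"],
                   env=env, check=True)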