SERPENT Guide
Running Simulations in Serpent
Population control, boundary conditions, cross-section libraries, execution modes, and parallel computing for production calculations
Simulation Settings Basics
Serpent simulations are configured through set cards in the input file. The most fundamental settings specify neutron population, boundary conditions, cross-section data path, and optionally a random number seed for reproducibility.
Fundamental Simulation Settings
% --- Simulation settings ---
set pop 10000 100 20 % Population size, active cycles, inactive cycles
set bc 2 % Boundary conditions (2 = reflective)
set acelib "sss_endfb7u.xsdata" % Path to cross-section library
set seed 1234567 % Random number seed (optional)
Population Control
The set pop card controls neutrons per cycle and cycle counts. The syntax is set pop NPOP NACT NINACT: NPOP neutrons per cycle, NACT active cycles for tallying, and NINACT inactive cycles for fission source convergence. Inactive cycles are essential because the initial source distribution is a poor approximation to the true eigenmode, and results tallied before convergence carry a systematic bias that cannot be reduced by running more active cycles.
Population Control Syntax
set pop NPOP NACT NINACT
NPOP specifies the number of neutrons per cycle, NACT the number of active cycles for tallying results, and NINACT the number of inactive (discard) cycles for source convergence.
For simple models (pin cells, small assemblies), 10,000–50,000 neutrons per cycle with 100–200 active and 20–50 inactive cycles typically suffice. Full-core models require 100,000–1,000,000 neutrons per cycle with 200–500 active and 50–100 inactive cycles for acceptable pin-level uncertainties. Statistical uncertainty decreases as 1/sqrt(NPOP × NACT).
Start with small test populations (1,000 neutrons, 10 active, 5 inactive cycles) to check for geometry errors and material problems before committing to a production run.
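The 1/sqrt(NPOP × NACT) scaling makes it easy to estimate how much larger a production run must be to reach a target uncertainty. A minimal sketch of that arithmetic (plain Python, not part of Serpent; the pcm figures are made-up examples):

```python
def scale_factor(current_sigma: float, target_sigma: float) -> float:
    """Factor by which the total active histories (NPOP * NACT) must grow
    to shrink the statistical uncertainty from current to target,
    using the 1/sqrt(histories) scaling of Monte Carlo estimates."""
    return (current_sigma / target_sigma) ** 2

# Example: a test run gave a k-eff uncertainty of 40 pcm; reaching 10 pcm
# requires (40/10)^2 = 16x more active histories, e.g. by raising NPOP,
# NACT, or both.
print(scale_factor(40.0, 10.0))  # 16.0
```

Whether to grow NPOP or NACT is a trade-off: larger NPOP improves source sampling per cycle, while more active cycles spread the cost over the iteration.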
Boundary Conditions
Boundary conditions (set bc) define neutron behavior at the geometry boundary. Type 1 (vacuum) terminates neutrons at the boundary — appropriate for complete reactor models. Type 2 (reflective) mirrors trajectories, simulating infinite repetition — used for pin cell and assembly calculations or symmetry models. Type 3 (periodic) wraps neutrons to the opposite boundary for translational periodicity.
Boundary Condition Specification
set bc BOUNDARY_TYPE
A single value applies the same condition to all boundaries. Type 1 is vacuum (neutrons escape), type 2 is reflective (neutrons mirror), and type 3 is periodic (neutrons wrap around).
Different conditions can be applied per dimension by specifying multiple values. For example, a 3D assembly model might use reflective in x and y (infinite array) with vacuum in z (finite height). Boundary condition choice significantly affects the eigenvalue and flux distributions.
Dimension-Specific Boundary Conditions
set bc 2 1 2 % Reflective in X, vacuum in Y, reflective in Z
Pin cell models typically use reflective boundaries on all faces to simulate an infinite lattice. 3D assembly models use reflective conditions on x and y faces with vacuum on z faces. Full-core models generally use vacuum boundaries on all faces.
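The three boundary types can be pictured with a toy one-dimensional slab on [0, L]: vacuum discards a particle that steps outside, reflective mirrors its position back inside, and periodic wraps it to the opposite face. A conceptual sketch (illustrative Python only, unrelated to Serpent's internal tracking; the type numbers follow Serpent's convention):

```python
def apply_bc(x: float, length: float, bc: int):
    """Position of a particle in a 1D slab [0, length] after a boundary
    crossing, or None if it escapes. bc: 1 = vacuum, 2 = reflective,
    3 = periodic (same numbering as Serpent's set bc card)."""
    if 0.0 <= x <= length:
        return x                  # still inside, nothing to do
    if bc == 1:                   # vacuum: history is terminated
        return None
    if bc == 2:                   # reflective: mirror about the crossed face
        x = abs(x)                # reflect about x = 0
        while x > length:         # fold back in case the step overshot
            x = abs(2.0 * length - x)
        return x
    if bc == 3:                   # periodic: re-enter from the opposite face
        return x % length
    raise ValueError("unknown boundary type")

print(apply_bc(-0.3, 10.0, 2))   # 0.3  (mirrored back inside)
print(apply_bc(12.0, 10.0, 3))   # 2.0  (wrapped to the opposite side)
```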
Cross-Section and Library Settings
The set acelib card points to the directory file indexing ACE-format cross-section data. Most installations include standard ENDF/B-VII or ENDF/B-VIII libraries. Additional libraries are needed for specific applications: set declib for decay data (burnup/activation), set nfylib for fission yields (depletion), and set sfylib for spontaneous fission yields. Paths must be correct — either absolute or relative to the working directory.
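Because a wrong library path is a common cause of immediate run failure, it can pay to verify every referenced data file before submitting a long job. A small pre-flight check (a hypothetical helper in plain Python; the card names are the four listed above):

```python
import os
import re

# set-cards whose argument is a quoted path to a nuclear data file
LIBRARY_CARDS = ("acelib", "declib", "nfylib", "sfylib")

def missing_libraries(input_deck: str):
    """Scan a Serpent input deck and return (card, path) pairs whose
    quoted library file does not exist on disk. Relative paths are
    resolved against the current working directory, as Serpent does."""
    pattern = re.compile(
        r'^\s*set\s+(%s)\s+"([^"]+)"' % "|".join(LIBRARY_CARDS),
        re.MULTILINE)
    with open(input_deck) as f:
        text = f.read()
    return [(card, path) for card, path in pattern.findall(text)
            if not os.path.isfile(path)]
```

An empty return list means every referenced library was found; anything else is worth fixing before launch.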
Nuclear Data Library Specification
set acelib "sss_endfb7u.xsdata" % Path to cross-section library
Additional Data Libraries
set declib "sss_endfb7.dec" % Decay data library
set nfylib "sss_endfb7.nfy" % Fission yield library
set sfylib "sss_endfb7.sfy" % Spontaneous fission yield data
Simulation Modes
The default mode is the criticality (k-eigenvalue) calculation, solving for k-effective and the associated flux distribution. No special mode selection is required — specifying set pop triggers power iteration with automatic fission source convergence and tally accumulation.
Criticality Mode
set pop 50000 200 50 % Population size, active cycles, inactive cycles
External source mode handles fixed-source problems (shielding, detector response, subcritical systems). Sources are defined with the src card, and set nps replaces set pop. There are no inactive cycles — all particles contribute to tallies. The source definition must include spatial distribution, energy spectrum, and angular characteristics.
External Source Mode
% Define a point source of 14 MeV neutrons
src 1
sp 0.0 0.0 0.0
se 14.0
% Run in external source mode
set nps 1000000 % Number of source neutrons to simulate
In external source mode, use set nps to specify the total number of source particles rather than set pop, since there is no cycle-based iteration.
Running the Simulation
The Serpent 2 executable (sss2) is invoked with the input file path. For OpenMP threading, pass -omp N on the command line — not a set omp style directive inside the input deck.
Command-Line Execution
sss2 input_file.inp
Execution with OpenMP Threading
sss2 -omp 8 input_file.inp # Run with 8 OpenMP threads
For MPI distributed-memory execution, launch via mpirun -np N sss2 input_file (requires an MPI-compiled build). Consult the Serpent 2 documentation for version-specific command-line flags.
During execution, Serpent prints the current cycle, the running k-effective estimate with its uncertainty, the simulation speed, and memory usage. On Linux, launch long calculations under nohup (or a terminal multiplexer such as screen or tmux) so they survive a disconnected session.
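When runs are scripted (parameter sweeps, batch submission), it is convenient to assemble the command line programmatically. A sketch of a launcher helper (hypothetical Python, assuming sss2 and mpirun are on PATH; the flag names are exactly those shown above):

```python
def build_command(input_file: str, omp: int = 0, mpi: int = 0):
    """Assemble the argv list for a Serpent 2 run.
    omp > 0 adds '-omp N' threading; mpi > 0 wraps the call in mpirun."""
    cmd = ["sss2"]
    if omp > 0:
        cmd += ["-omp", str(omp)]
    cmd.append(input_file)
    if mpi > 0:
        cmd = ["mpirun", "-np", str(mpi)] + cmd
    return cmd

print(build_command("pin.inp", omp=8))
# ['sss2', '-omp', '8', 'pin.inp']
```

The resulting list can be handed to subprocess.run(..., check=True) so a non-zero Serpent exit status fails the surrounding script loudly.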
Performance Considerations
Memory requirements depend on geometric complexity, number of materials/nuclides, population size, and tally dimensionality. The -memstat option (verify availability in your Serpent 2 version) reports detailed memory allocation statistics.
Memory Monitoring
sss2 -memstat input_file.inp
Serpent supports OpenMP (shared-memory) and MPI (distributed-memory) parallelization. OpenMP scales well up to the number of physical CPU cores; hyperthreading provides marginal benefit and may degrade performance due to memory bandwidth contention. MPI distributes across multiple nodes when the model exceeds single-node memory. Hybrid MPI+OpenMP provides the best scaling on HPC clusters.
Parallel Execution Options
# OpenMP (multi-threading on a single machine)
sss2 -omp 16 input_file.inp
# MPI (distributed across multiple nodes)
mpirun -np 64 sss2 input_file.inp
For single-machine runs, OpenMP threading is usually sufficient. Very large models can use MPI to scale across multiple compute nodes, and the two can be combined for hybrid parallelization.
Complete Simulation Setup Example
The following example combines simulation settings for a pin cell model. For many problems, only set pop, set bc, and set acelib are needed; additional options are added as required.
Comprehensive Pin Cell Configuration
% --- Simulation settings section ---
% Cross-section library path
set acelib "sss_endfb7u.xsdata"
% Population control
set pop 50000 200 50 % 50k neutrons, 200 active, 50 inactive cycles
% Boundary conditions
set bc 2 % Reflective on all sides (infinite lattice)
% Random seed (optional, for reproducibility)
set seed 1234567
% Neutron physics options
set nfg 4 % Use 4-group structure for few-group constants
set ures 1 % Enable unresolved resonance probability table sampling
plot 3 500 500 % XY cross-section geometry plot (500x500 pixels)
% Optional settings for faster simulation
set opti 1 % Optimization mode (1 = lowest memory use; higher modes trade memory for speed)
% --- Output control ---
set printm 1 % Print material compositions
set power 40000 % Normalize results to 40 kW total power
Most calculations need only population, boundary condition, and library settings. Add physics and output options incrementally as needed.