SCONE Guide
SCONE Verification Notice
SCONE is a research-oriented code with a smaller user base than MCNP, OpenMC, or SERPENT. Our examples are intended as educational guidance. For authoritative syntax, physics options, and nuclear data requirements, consult the official documentation.
Parallel Computing in SCONE
OpenMP Shared-Memory Parallelism
No Parallel Block in Input
SCONE does not have a parallel { mode = ... } input block. Parallelism is controlled at compile time (OpenMP flags) and runtime (environment variables). There is no MPI, hybrid mode, domain decomposition, or load balancing as input-deck features.
How SCONE Parallelizes
SCONE runs in parallel using OpenMP (shared memory). Each particle history is independent, so histories can be distributed across threads. Parallelism is controlled by:
- Compile flags: Enable OpenMP when building SCONE (e.g., -fopenmp for gfortran).
- Environment variables: Set OMP_NUM_THREADS before running.
- Input settings: pop, active, and inactive affect total work and parallel efficiency.
Compiling with OpenMP
SCONE must be built with OpenMP support. Typical compiler flags:
# Compile SCONE with OpenMP support (typical flags)
gfortran -fopenmp -O2 -o scone scone_main.f90 [other sources...]
# Or with Intel Fortran
ifort -qopenmp -O2 -o scone scone_main.f90 [other sources...]
Running with Multiple Threads
Set OMP_NUM_THREADS before launching the executable. The value should match the number of physical cores (or logical cores) you want to use.
# Run with 4 OpenMP threads (Linux/macOS)
export OMP_NUM_THREADS=4
./build/scone.out input.inp
# Or inline for a single run
OMP_NUM_THREADS=4 ./build/scone.out input.inp
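Rather than hard-coding the thread count, it can be derived from the machine. A hedged sketch for Linux, reusing the binary and input paths from the examples above; note that nproc reports logical cores, so halve it if you want one thread per physical core on a hyper-threaded CPU:

```shell
# Size the thread count from the available logical cores (Linux)
export OMP_NUM_THREADS=$(nproc)
echo "Using $OMP_NUM_THREADS threads"
./build/scone.out input.inp
```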
# Windows (PowerShell)
$env:OMP_NUM_THREADS = 4
.\build\scone.out input.inp
Input Settings That Affect Parallel Performance
The pop, active, and inactive parameters control how many particles and cycles are run. A larger pop gives each thread more histories per cycle, which tends to improve parallel efficiency, but larger values also increase total runtime and memory use.
type eigenPhysicsPackage;
pop 100000; active 300; inactive 200;
XSdata ce; dataType ce;
collisionOperator { neutronCE { type neutronCEstd; } }
transportOperator { type transportOperatorDT; }
geometry { type geometryStd; boundary (0 0 0 0 0 0); graph { type shrunk; } surfaces { ... } cells { ... } universes { ... } }
nuclearData { handles { ce { type aceNeutronDatabase; aceLibrary /path; } } materials { ... } }
inactiveTally {}
activeTally { fissionRate { type collisionClerk; response (fission); fission { type macroResponse; MT -6; } } }
Key Parameters
- pop: Total particles per cycle (e.g., 100000).
- active: Number of active cycles for tally accumulation.
- inactive: Number of inactive cycles for source convergence.
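As a quick sizing check, the total number of histories simulated is pop × (active + inactive). A minimal sketch using the values from the example deck above:

```shell
# Sizing check: total histories = pop * (active + inactive)
pop=100000
active=300
inactive=200
total=$((pop * (active + inactive)))
echo "Total histories: $total"   # prints 50000000
```

Only the active cycles contribute to tallies, but all cycles cost wall time, so this product is a reasonable first estimate of relative runtime between input decks.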
What SCONE Does Not Support
The following are not available in SCONE:
- MPI (distributed memory across nodes)
- Hybrid MPI+OpenMP mode
- Domain decomposition
- Load balancing as an input-deck feature
- parallel { mode = ... } or any parallel input block
- settings { ... } input block
Best Practices
- Match OMP_NUM_THREADS to available cores.
- Use enough pop and active cycles for statistically meaningful results.
- Ensure sufficient inactive cycles for source convergence.
- For large problems, consider memory per thread when scaling.
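A simple way to check how well a given problem scales is to run the same input with increasing thread counts and compare wall times. A hedged sketch, assuming the binary and input paths used in the examples above:

```shell
# Strong-scaling check: same input deck, varying thread counts
for t in 1 2 4 8; do
    export OMP_NUM_THREADS=$t
    echo "=== OMP_NUM_THREADS=$t ==="
    time ./build/scone.out input.inp
done
```

If doubling the threads stops reducing the wall time well before the core count is reached, the per-cycle work (pop) is likely too small relative to the synchronization overhead, or memory bandwidth is the bottleneck.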
Next Steps
SCONE is designed for extensibility through Fortran source code modification. The Extensions section explains how to add new tally clerks, surface types, or physics packages.