MCNP Guide
Running MCNP Simulations
From setup to execution and monitoring
Essential Setup
Before running any MCNP simulation, you need to ensure your environment is properly configured. MCNP requires access to nuclear data libraries and proper system settings to function correctly. The most critical requirement is setting the DATAPATH environment variable to point to your cross-section data.
Environment Configuration
# Linux/Mac setup
export DATAPATH=/opt/mcnp/MCNP_DATA
export PATH=$PATH:/opt/mcnp/bin
# Windows setup
set DATAPATH=C:\MCNP\MCNP_DATA
set PATH=%PATH%;C:\MCNP\bin
# Verify setup
mcnp6 ver # Check version and data path
The DATAPATH variable tells MCNP where to find the nuclear cross-section libraries; without it, MCNP cannot access the nuclear data needed for transport calculations. The PATH variable should include the MCNP executable directory so you can run mcnp6 from any location.
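A quick pre-flight check can catch a missing or broken DATAPATH before a run fails mid-stream. This is a minimal sketch, assuming only that DATAPATH should name an existing directory containing an xsdir cross-section index; the function name is illustrative, and the exact index filename varies by installation:

```shell
# check_datapath: verify that DATAPATH names a real directory containing
# an xsdir cross-section index. Prints a diagnosis and returns nonzero on
# failure. The filename "xsdir" is the conventional index name; adjust it
# if your installation uses a versioned name.
check_datapath() {
  dir="${DATAPATH:-}"
  if [ -z "$dir" ]; then
    echo "DATAPATH is not set" >&2; return 1
  fi
  if [ ! -d "$dir" ]; then
    echo "DATAPATH is not a directory: $dir" >&2; return 1
  fi
  if [ ! -f "$dir/xsdir" ]; then
    echo "no xsdir index found under $dir" >&2; return 1
  fi
  echo "DATAPATH looks OK: $dir"
}
```

Run it in the same shell (or job script) that will launch mcnp6, so it sees the same environment.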
Running Your First Simulation
Once your environment is configured, running MCNP simulations becomes straightforward. The basic command structure uses simple keyword arguments to specify input files, output destinations, and execution options.
Basic Execution
# Simple run with default settings
mcnp6 i=uranium_sphere.i o=uranium_sphere.o
# Run with custom output file name
mcnp6 i=fuel_assembly.i n=fuel_results.out
# Continue from previous checkpoint
mcnp6 c i=fuel_assembly.i n=fuel_results.out runtpe=fuel_assembly.r
The 'i=' parameter specifies your input file, 'o=' names the output file, and 'n=' sets a common name for the files the run produces. MCNP will create several additional files during execution, including a RUNTPE file for checkpointing and restart capabilities. The 'c' option continues a previous calculation from where it left off.
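The start-or-continue decision can be wrapped in a small helper: if the RUNTPE file already exists, restart from it; otherwise begin fresh. This sketch only prints the command it would run (the helper name is illustrative), so you can audit it before spending cluster time:

```shell
# mcnp_cmd: emit the mcnp6 command line for a fresh run or a continuation,
# depending on whether the RUNTPE file already exists. Echoing instead of
# executing makes the decision easy to inspect (pipe to sh to actually run).
mcnp_cmd() {
  input="$1"   # input deck, e.g. fuel_assembly.i
  out="$2"     # output name, e.g. fuel_results.out
  rtp="$3"     # RUNTPE file, e.g. fuel_assembly.r
  if [ -f "$rtp" ]; then
    echo "mcnp6 c i=$input n=$out runtpe=$rtp"
  else
    echo "mcnp6 i=$input n=$out"
  fi
}
```

For example, `mcnp_cmd fuel_assembly.i fuel_results.out fuel_assembly.r | sh` runs whichever form applies.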
Monitoring Progress
MCNP provides real-time feedback during execution through screen output and log files. For criticality calculations, you'll see k-effective values converging as the simulation progresses. The output shows cycle numbers, particle counts, and statistical information.
# Extra debug output: add a DBCN card to the input deck, e.g.
#   DBCN 1
mcnp6 i=input.i o=output.o
# Monitor progress in real-time
tail -f output.o
# Check for completion
grep "final estimated" output.o
The DBCN card, placed in the data block of the input deck rather than on the command line, provides additional debugging information, useful for tracking down problems. The tail command lets you monitor progress in real time, while grep searches for completion indicators in the output file.
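For unattended jobs, the tail/grep pattern above can be folded into a polling loop that blocks until the completion marker appears or a timeout expires. A minimal sketch; the function name and default timeout are illustrative:

```shell
# wait_for_line: poll an output file until a marker string appears or a
# timeout (roughly in seconds) expires. The marker "final estimated"
# matches the completion check shown above.
wait_for_line() {
  file="$1"; marker="$2"; timeout="${3:-3600}"
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if grep -q "$marker" "$file" 2>/dev/null; then
      echo "found: $marker"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out waiting for: $marker" >&2
  return 1
}
```

For example, `wait_for_line output.o "final estimated" 86400 && mail_me.sh` could notify you when a day-long run finishes (mail_me.sh being whatever notification hook you use).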
Parallel Processing
Modern MCNP simulations benefit significantly from parallel processing. MCNP can distribute particle histories across multiple CPU cores, dramatically reducing calculation time. The key is understanding how to specify the number of parallel tasks and manage memory allocation.
Shared Memory Parallelism
# Run with 8 parallel tasks on one node
mcnp6 i=input.i o=output.o tasks 8
# Optimal task count (usually equals physical CPU cores)
mcnp6 i=input.i o=output.o tasks $(nproc)
The tasks parameter should typically equal the number of CPU cores on your system. Each task also needs adequate memory (often 2-4 GB), so check available RAM before raising the task count. Too many tasks can actually slow down calculations due to memory contention and communication overhead.
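One defensive way to choose the task count is to take the smaller of the core count and a memory-derived cap. The sketch below is Linux-specific (it reads /proc/meminfo), and the function name and 2000 MB per-task budget are illustrative assumptions, not MCNP requirements:

```shell
# pick_tasks: choose a task count as the smaller of the CPU core count
# and a cap derived from available memory and a per-task budget in MB.
pick_tasks() {
  per_task_mb="${1:-2000}"
  cores=$(nproc 2>/dev/null || echo 1)
  # MemAvailable is reported in kB in /proc/meminfo (Linux); fall back to
  # a conservative 4 GB guess on systems without it.
  mem_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo 2>/dev/null)
  mem_kb="${mem_kb:-4194304}"
  by_mem=$(( mem_kb / 1024 / per_task_mb ))
  if [ "$by_mem" -lt 1 ]; then by_mem=1; fi
  if [ "$cores" -lt "$by_mem" ]; then echo "$cores"; else echo "$by_mem"; fi
}
```

Usage would look like `mcnp6 i=input.i o=output.o tasks $(pick_tasks 3000)` for a 3 GB-per-task budget.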
Distributed Memory (MPI)
# Run across multiple nodes with MPI
mpirun -np 32 mcnp6.mpi i=input.i o=output.o
# SLURM job script example
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=24:00:00
#SBATCH --mem=64G
srun mcnp6.mpi i=reactor.i o=reactor.o
MPI parallelism allows MCNP to run across multiple compute nodes, scaling to hundreds or thousands of cores. It requires the MPI-enabled build of MCNP (mcnp6.mpi) and appropriate job-scheduler configuration. The total number of MPI ranks should balance communication overhead with computational efficiency.
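One easy mistake is handing mpirun or srun a rank count that disagrees with the scheduler allocation. For the SLURM directives above, the total is simply nodes times tasks per node:

```shell
# Total MPI ranks implied by the SLURM script above:
# 2 nodes x 16 tasks per node. Keeping the arithmetic explicit avoids a
# mismatch between an mpirun -np value and the actual allocation.
nodes=2
ntasks_per_node=16
total_ranks=$((nodes * ntasks_per_node))
echo "total MPI ranks: $total_ranks"   # prints 32
```

When you use srun inside the batch script, SLURM supplies the rank count itself; the arithmetic matters mainly when invoking mpirun -np by hand.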
Checkpoint and Restart
Long-running simulations need checkpoint capabilities to protect against system failures and time limits. MCNP automatically creates RUNTPE files that capture the complete simulation state, allowing seamless restarts from any checkpoint.
# Checkpoint frequency: add a PRDMP card to the input deck, e.g.
#   PRDMP j -120   $ dump to the RUNTPE file every 120 minutes
# Manual checkpoint during run (send signal)
kill -USR1 <mcnp_process_id>
# Restart from checkpoint
mcnp6 c i=input.i o=output_new.o runtpe=output.r
# Verify restart statistics match
grep "cycles" output.o output_new.o
The PRDMP card (an input-deck card, not a command-line option) controls print and dump frequency; a negative second entry is interpreted as an interval in minutes of computer time, so 'j -120' writes a RUNTPE dump roughly every two hours. When restarting, MCNP continues from the state saved in the RUNTPE file, maintaining statistical consistency. Always verify that restarted runs show continuous cycle numbering.
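Checking cycle continuity after a restart can be scripted. The helper below pulls the highest number following the word "cycle" out of an output file; the helper name is illustrative and the exact line format varies between MCNP versions, so treat the pattern as an assumption to adapt:

```shell
# last_cycle: print the highest number that follows the word "cycle"
# in a file. Compare the original and restarted outputs to confirm the
# restart picked up where the original stopped.
last_cycle() {
  awk '{ for (i = 1; i < NF; i++) if ($i == "cycle") print $(i + 1) }' "$1" |
    sort -n | tail -1
}
```

For example, a restart is continuous if `last_cycle output_new.o` is greater than `last_cycle output.o` with no gap between them.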
Troubleshooting Common Issues
Geometry Problems
Lost particles are the most common MCNP error, usually indicating geometry problems where particles reach regions of space that are undefined or defined more than once. MCNP provides detailed tracking information to help identify these issues.
# Lost-particle limits: the LOST card in the input deck, e.g.
#   LOST 10 10   $ quit after 10 lost particles, print 10 debug events
mcnp6 i=input.i o=output.o
# Plot geometry to check for gaps
mcnp6 i=input.i ip
# Detailed particle-track output: the PTRAC card in the input deck
The LOST card (an input-deck card) sets how many lost particles are tolerated before the run terminates and how many lost-particle events are printed in detail, including position, direction, and energy at the point of loss. The plotting mode (ip) lets you visualize your geometry to identify gaps or overlaps. The PTRAC card writes particle-track files so you can follow specific particles through their histories.
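A quick way to quantify geometry trouble is to count lost-particle mentions across output files. The matching phrase differs between MCNP versions, so the loose pattern and function name here are illustrative assumptions:

```shell
# count_lost: count lines mentioning "lost" in an output file.
# grep -c prints the number of matching lines; "|| true" keeps the
# function from failing when there are zero matches, since grep then
# exits nonzero while still printing 0.
count_lost() {
  grep -ci "lost" "$1" 2>/dev/null || true
}
```

Running `for f in *.o; do echo "$f: $(count_lost "$f")"; done` gives a fast per-run summary when debugging a parametric study.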
Performance Issues
Slow simulations often result from inefficient variance reduction, poor geometry organization, or inappropriate physics settings. MCNP provides tools to diagnose and optimize performance bottlenecks.
# Check memory usage of a running job at the OS level
ps -o rss=,comm= -p <mcnp_process_id>
# Timing breakdown: enable extra prints with a DBCN card, then inspect
grep -i "time" output.o
# Deep-penetration efficiency: tune weight windows with a WWP card, e.g.
#   WWP:n 5 3 5
Operating-system tools such as ps or top report each task's memory footprint, helping you balance task count against available RAM. The timing summaries in the output file show where the calculation spends its time, identifying bottlenecks. Weight-window parameters (the WWP card, usually paired with generated weight windows) can dramatically improve efficiency for deep-penetration problems.
Best Practices for Production Runs
Always test your input with a small number of particles first to verify geometry and physics settings. Use appropriate checkpoint intervals based on your system's reliability and job time limits. Monitor memory usage and adjust task counts to avoid swapping.
Keep detailed records of simulation parameters, especially for parametric studies. Document any unusual results or convergence issues. Use version control for input files and maintain consistent naming conventions for output files.
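The record-keeping advice above is easy to automate: append one timestamped line per run to a log kept alongside the input files. The helper name and log format are illustrative:

```shell
# run_record: append a timestamped, tab-separated record of a run's
# parameters to a log file. Keep the log under version control next to
# the input decks so parametric studies stay traceable.
run_record() {
  log="$1"; shift
  printf '%s\t%s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> "$log"
}
```

For example, `run_record runs.log "i=fuel_assembly.i tasks 8 note=baseline"` logs one line per submission, and the log doubles as a manifest for the output files.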