Documentation | Examples | Paper
MLMOD is a Python/C++ machine learning simulation package that integrates data-driven models directly into LAMMPS molecular dynamics simulations. It enables learning data-driven models for particle dynamics, forces, mobility tensors, and other quantities of interest using trained ML approaches, including neural networks, Gaussian process regression, and other PyTorch-compatible architectures.
Traditional molecular dynamics (MD) simulations use hand-crafted analytical models for interparticle forces and dynamics. MLMOD extends LAMMPS to allow these components to be replaced or augmented by machine-learned models trained from data or derived from physical principles.
| Capability | Description |
|---|---|
| ML Dynamics Integrators | Replace the LAMMPS integrator with a learned map |
| ML Force Fields | Compute interparticle forces via a learned model |
| ML Mobility Tensors | Hydrodynamic coupling via a learned mobility tensor |
| Quantities of Interest | On-the-fly observables |
| MPI Parallelism | Large-scale simulations with MPI-parallel LAMMPS |
Models are defined in PyTorch, traced with torch.jit.trace, exported to .pt format, and loaded at runtime by the MLMOD C++ extension to LAMMPS. The LAMMPS engine handles spatial decomposition, neighbor lists, and I/O while MLMOD handles model evaluation.
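This export workflow can be sketched with a toy stand-in model (the class name `LinearDrift` and the file name `toy_model.pt` are hypothetical; the real models are generated by the example scripts):

```python
import torch

class LinearDrift(torch.nn.Module):
    """Toy stand-in for a learned model; purely illustrative."""
    def forward(self, z):
        # z: flat column vector of the system state, shape (n, 1)
        return 0.9 * z  # a simple damped map

model = LinearDrift()
example = torch.zeros((6, 1))           # example input fixes the traced shapes
traced = torch.jit.trace(model, example)
traced.save("toy_model.pt")             # .pt file of the kind MLMOD loads at runtime
reloaded = torch.jit.load("toy_model.pt")
```

The key constraint is that `torch.jit.trace` records the operations for one example input, so the model's forward pass should avoid data-dependent control flow.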
P.J. Atzberger, MLMOD Package: Machine Learning Methods for Data-Driven Modeling in LAMMPS, Journal of Open Source Software, 8(89), 5620, (2023). doi:10.21105/joss.05620
The quick_install.py script attempts to detect your platform and install the appropriate pre-built binary:
python quick_install.py
Download the appropriate wheel for your platform and install with pip:
| Platform | Wheel |
|---|---|
| 🐧 Linux Debian 9+ / Ubuntu (standard) | mlmod_lammps-1.0.3-py3-none-manylinux_2_24_x86_64.whl |
| 🐧 Linux Debian 9+ / Ubuntu (flexible) | mlmod_lammps-1.0.3-py3-none-any.whl |
Pre-built binaries are currently available for Debian 9+/Ubuntu and CentOS 7+, with Python 3.6+.
Download from: https://web.math.ucsb.edu/~atzberg/mlmod/distr/
pip install -U mlmod_lammps-1.0.3-py3-none-manylinux_2_24_x86_64.whl
Or install directly from the URL:
pip install -U https://web.math.ucsb.edu/~atzberg/mlmod/distr/mlmod_lammps-1.0.3-py3-none-manylinux_2_24_x86_64.whl
pip install -r requirements.txt
Dependencies:
- numpy >= 1.21.1
- mlmod-lammps (pre-built wheel, see above)
- torch >= 1.11.0 (optional; needed for model generation, not for running simulations)
For desktop platforms, use a Docker container with a standard Ubuntu base:
docker run -it ubuntu:20.04 /bin/bash
apt update && apt install python3-pip
pip3 install mlmod_lammps-1.0.3-py3-none-any.whl
Or use the pre-installed Anaconda image:
docker run -it atzberg/ubuntu_20_04_anaconda1 /bin/bash
conda activate mlmod-lammps
python -c "from mlmod_lammps.tests import t1; t1.test()"
For full build-from-source instructions, see the documentation pages.
The general workflow in MLMOD is:
- Define and train a PyTorch model for dynamics, forces, mobility, or QoI.
- Export the model to .pt format using torch.jit.trace.
- Run a LAMMPS simulation that loads the .pt model via the MLMOD plugin.
from mlmod_lammps.lammps import lammps
import mlmod_lammps.util as m_util
L = lammps()
Lc = m_util.wrap_L(L, m_util.Lc_print)
# Standard LAMMPS setup commands
Lc("units nano")
Lc("atom_style angle")
Lc("region mybox prism -18 18 -18 18 -18 18 0 0 0")
Lc("boundary p p p")
Lc("create_box 1 mybox")
# ... add atoms, define fixes using the mlmod model ...
All examples are in the examples/ folder. Each example has a gen_mlmod_*.py script to generate the PyTorch model and a run_sim_*.py (or .ipynb) script to run the LAMMPS simulation. A run_full.sh convenience script runs both steps.
Replaces the standard LAMMPS integrator with a learned map:
① Generate the PyTorch model:
cd examples/dynamics1
python gen_mlmod_dynamics1.py
This defines a torch.nn.Module that accepts a concatenated state vector [X, V, F, Type] and returns updated positions and velocities. The model is traced and saved to output/gen_mlmod_dynamics1/gen_001/dyn1_dynamics1.pt.
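As a rough sketch of such a module, here is a simple Euler update standing in for the learned map (the class name `Dynamics1Sketch` and its state-layout assumptions are illustrative, not the example's actual code):

```python
import torch

class Dynamics1Sketch(torch.nn.Module):
    """Hypothetical integrator map: input is a flat column vector stacking
    [X; V; F; Type]; output stacks the updated [X_new; V_new]."""
    def __init__(self, num_atoms, num_dim=3, dt=0.01, mass=1.0):
        super().__init__()
        self.n = num_atoms * num_dim   # entries per vector quantity
        self.dt = dt
        self.mass = mass

    def forward(self, z):
        X = z[0:self.n]
        V = z[self.n:2 * self.n]
        F = z[2 * self.n:3 * self.n]
        V_new = V + (self.dt / self.mass) * F   # velocity update
        X_new = X + self.dt * V_new             # position update
        return torch.cat([X_new, V_new], dim=0)

num_atoms = 2
model = Dynamics1Sketch(num_atoms)
# X, V, F per dimension plus one Type entry per atom
z0 = torch.zeros((3 * num_atoms * 3 + num_atoms, 1))
traced = torch.jit.trace(model, z0)
```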
② Run the simulation:
python run_sim_dynamics1.py
The simulation uses LAMMPS with the MLMOD fix to call the exported model at each timestep. Output is written to VTK files for visualization.
Uses a learned force model to compute interparticle forces.
① Generate the PyTorch model:
cd examples/force1
python gen_mlmod_force1.py
The model is a torch.nn.Module that takes the concatenated system state and returns per-atom force vectors. Saved to output/gen_mlmod_force1/gen_001/F_force1.pt.
② Run the simulation:
python run_sim_force1.py
Models overdamped Brownian dynamics of particles coupled through a hydrodynamic mobility tensor.
Two mobility models are provided:
- 🔵 Oseen tensor (gen_mlmod_oseen1.py): $M_{ij}$ computed from pairwise distances.
- 🟢 Rotne-Prager-Yamakawa (RPY) tensor (gen_mlmod_rpy1.py): regularized version valid at short range.
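For reference, the pairwise Oseen blocks can be sketched in plain NumPy (the self-mobility block is simplified to the identity here, and the package's actual models are traced PyTorch modules rather than this helper):

```python
import numpy as np

def oseen_tensor(X, eta=1.0):
    """Assemble a 3N x 3N Oseen-type mobility tensor for positions X of
    shape (N, 3). Off-diagonal blocks use
    M_ij = (I + rhat rhat^T) / (8 pi eta r); the self block is set to the
    identity as a placeholder (a real model would use 1/(6 pi eta a) I)."""
    N = X.shape[0]
    M = np.zeros((3 * N, 3 * N))
    for i in range(N):
        M[3*i:3*i+3, 3*i:3*i+3] = np.eye(3)  # placeholder self-mobility
        for j in range(N):
            if i == j:
                continue
            r = X[i] - X[j]
            d = np.linalg.norm(r)
            rhat = r / d
            M[3*i:3*i+3, 3*j:3*j+3] = (
                np.eye(3) + np.outer(rhat, rhat)
            ) / (8 * np.pi * eta * d)
    return M
```

The RPY tensor modifies the off-diagonal blocks with finite-radius corrections so the tensor stays positive definite at short separations.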
① Generate the model:
cd examples/particles1
python gen_mlmod_oseen1.py  # or gen_mlmod_rpy1.py
This generates both the diagonal ($M_{ii}$) and off-diagonal ($M_{ij}$) mobility blocks as .pt models.
② Run the simulation:
python run_sim_particles1.py
A Jupyter notebook version is also available:
jupyter notebook run_sim_particles1.ipynb
Switch between Oseen and RPY by editing the model_case variable at the top of the script:
model_case = 'rpy1' # or 'oseen1'
Computes observable quantities (QoI) on the fly during the simulation.
cd examples/qoi1
python gen_mlmod_qoi1.py
python run_sim_qoi1.py
Runs a force-field simulation using MPI parallelism across multiple processes.
① Generate the model:
cd examples/mpi1
python gen_mlmod_force1.py
② Run with MPI:
mpirun -n 4 python mpi_force1.py
⚠️ Requires mpi4py and an MPI-compiled build of MLMOD/LAMMPS. See the documentation for build instructions.
Simulation output is saved as VTK files in the output/ directory. These can be visualized with ParaView. Each example includes a vis_pv1.py script and vis_pv1.sh convenience launcher.
Any PyTorch model that can be traced with torch.jit.trace can be used. The general pattern is:
import torch
class MyForceModel(torch.nn.Module):
def forward(self, z):
# z is a flat column vector: [X; V; F; Type] for all atoms
# reshape, compute, and return force vector of shape (num_atoms*num_dim, 1)
...
return forces
model = MyForceModel()
traced = torch.jit.trace(model, torch.zeros((input_size, 1)))
traced.save("my_model.pt")
The mask_input string (e.g., "X V F Type") controls which state quantities are concatenated and passed to the model.
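As a rough illustration of this concatenation (the exact packing order MLMOD uses internally may differ; `assemble_state` is a hypothetical helper):

```python
import numpy as np

def assemble_state(mask_input, X, V, F, Type):
    """Flatten and stack the selected per-atom quantities into one
    column vector, mimicking the mask_input selection."""
    parts = {"X": X, "V": V, "F": F, "Type": Type}
    cols = [np.asarray(parts[name], dtype=float).reshape(-1, 1)
            for name in mask_input.split()]
    return np.vstack(cols)
```

For 2 atoms in 3D with mask "X V F Type", this yields a (2*3*3 + 2, 1) = (20, 1) column vector, matching the flat layout the traced model's forward pass expects.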
mlmod/
├── examples/
│   ├── dynamics1/      # ML time-step integrator example
│   ├── force1/         # ML force field example
│   ├── particles1/     # ML mobility tensor (Oseen/RPY) example
│   ├── qoi1/           # Quantities of interest example
│   ├── mpi1/           # MPI parallel simulation example
│   └── ...             # Other examples
├── src/                # C++ source for LAMMPS plugin
├── tests/              # Package tests
├── doc/                # Documentation source
├── paper/              # JOSS paper
├── requirements.txt
└── quick_install.py
If you use MLMOD in your research, please cite:
@article{mlmod_atzberger,
author = {Paul J. Atzberger},
journal = {Journal of Open Source Software},
title = {MLMOD: Machine Learning Methods for Data-Driven Modeling in LAMMPS},
year = {2023},
publisher = {The Open Journal},
volume = {8},
number = {89},
pages = {5620},
doi = {10.21105/joss.05620},
url = {https://doi.org/10.21105/joss.05620}
}
| Resource | Link |
|---|---|
| 🎬 Video overview | YouTube |
| 📖 Documentation | web.math.ucsb.edu/~atzberg/mlmod/docs |
| 💬 Mailing list (updates & releases) | Sign up |
| 🐛 Bug reports | Submit here |
| 📣 Usage / citation reporting | Submit here |
Support from NSF Grant DMS-2306101, NSF Grant DMS-1616353, and DOE Grant ASCR PHILMS DE-SC0019246 is gratefully acknowledged. More recent documentation and development were assisted by Anthropic's Claude (3.6 Sonnet). The core algorithms and mathematical frameworks in this package were designed and manually implemented by the authors.
