GPUMD Versions

Graphics Processing Units Molecular Dynamics

v3.9.3

1 week ago
  • Introduced the ensemble ti_rs keyword to enable nonequilibrium free-energy calculations along an isobaric path.
  • Introduced the ensemble ti_as keyword to enable nonequilibrium free-energy calculations along an isothermal path.
  • Improved automation for computing absolute Gibbs free energy with ensemble ti_spring, eliminating the need for post-processing.
  • Improved automation for calculating spring constants with ensemble ti_spring.
  • Fixed a memory bug introduced in GPUMD-v3.9.2 for compute_shc.
  • Improved the all-group option for compute_shc.

v3.9.2

1 month ago
  • New features

    • ensemble ti: This keyword sets up an equilibrium thermodynamic integration integrator. It is intended for testing purposes; its only difference from the ti_spring keyword is that the lambda value is held fixed rather than varied.
    • ensemble nphug: This keyword sets up a Hugoniot thermostat integrator.
    • ensemble mirror: This keyword is employed to configure a momentum mirror shock wave simulation, where atoms are deflected by a moving momentum mirror to generate a shock wave.
    • ensemble piston: This keyword is employed to configure a piston shock wave simulation, where a fixed wall of atoms is displaced at a specified velocity to generate a shock wave.
    • dump_piston: Piston simulations commonly involve millions of atoms, and dumping the virial and velocity data of every atom can lead to excessively large output files that are cumbersome to process. The dump_piston command addresses this by computing spatially resolved thermodynamic information during the simulation.
    • dump_dipole: Predicts the dipole on the fly using a TNEP model; see https://doi.org/10.1021/acs.jctc.3c01343
    • dump_polarizability: Predicts the polarizability on the fly using a TNEP model; see https://doi.org/10.1021/acs.jctc.3c01343
  • Enhancements and changes

    • Changed the default values of basis_size for NEP training from (12, 12) to (8, 8); see the nep.in sketch after this list.
    • Improved the default regularization methods.
    • Added stress_train.out and stress_test.out output files during NEP training.
    • Added option for compute_shc to calculate the SHC for all the groups in a grouping method simultaneously.
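
For orientation, here is a minimal nep.in sketch that explicitly restores the previous (12, 12) basis; the element list, cutoffs, and generation count below are placeholders rather than recommendations, and the trailing # annotations are explanatory notes:

    type        2 Pb Te    # number of element types followed by chemical symbols (hypothetical system)
    cutoff      8 4        # radial and angular cutoffs in Angstrom (illustrative values)
    basis_size  12 12      # override the new (8, 8) default back to (12, 12)
    generation  100000     # number of training generations (illustrative value)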

v3.9.1

6 months ago
  • Fixed a few bugs related to the MTTK integrators and the replicate keyword; see #523 for details.
  • Improved the documentation.

v3.9

6 months ago
  • New features:
    • A new keyword electron_stop in run.in to apply electron stopping.
    • A new keyword compute_rdf in run.in to compute the radial distribution function (RDF).
    • A new keyword mc in run.in to perform efficient MCMD simulations in the canonical, semi-grand canonical, and variance-constrained semi-grand canonical ensembles (currently with NEP models only).
    • A new keyword dftd3 in run.in to add the D3 dispersion correction on top of a NEP model (currently supported for NEP only).
    • A new keyword replicate in run.in to replicate the initial model (see the run.in sketch after this list).
    • A set of new ensemble options npt_mttk, nph_mttk, and nvt_mttk.
    • A new keyword ensemble ti_spring for free-energy calculations using the nonequilibrium thermodynamic integration method.
    • A new keyword ensemble msst for multi-scale shock technique (MSST) simulations.
    • A new keyword compute_lsqt to compute the electronic transport properties by coupling MD and linear scaling quantum transport (LSQT).
    • A new option fire for the keyword minimize in run.in.
    • A new option has_potential for the keyword dump_exyz in run.in.
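
As a rough illustration of two of these additions, a run.in fragment might look like the following sketch; the supercell multiples, force tolerance, and maximum step count are assumptions to be checked against the manual, and the trailing # annotations are explanatory notes:

    potential   nep.txt            # NEP model file
    replicate   2 2 2              # replicate the initial model along the three cell vectors (assumed form)
    minimize    fire 1.0e-5 1000   # FIRE minimization with an assumed force tolerance (eV/A) and step limit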

v3.8

11 months ago
  • Bugfix:
    • The target temperature in ensemble nvt_lan incorrectly stayed at the initial value instead of changing linearly from the initial value to the final one.
  • New features:
    • Added the model_type keyword in nep.in, which can be set to 0, 1, or 2, corresponding to NEP models for potential, dipole, and polarizability, respectively (see the nep.in sketch after this list). Tutorials for training dipole and polarizability NEP models have also been created.
    • Added path-integral molecular dynamics (PIMD) and related techniques such as ring-polymer molecular dynamics (RPMD) and thermostatted RPMD (TRPMD). The new keywords include ensemble pimd, ensemble rpmd, ensemble trpmd, and dump_beads.
    • Added the homogeneous non-equilibrium molecular dynamics Evans-Cummings (HNEMDEC) method for calculating the thermal conductivity and related transport coefficients in multicomponent systems. The new keyword is compute_hnemdec and the related output file is onsager.out.
    • Added an option to output velocity in NetCDF trajectory file. See the updated dump_netcdf command.
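
As an example, a nep.in for training a dipole model might begin with the following sketch; the element list and cutoffs are placeholders, and the trailing # annotations are explanatory notes:

    model_type  1        # 0 = potential, 1 = dipole, 2 = polarizability
    type        2 H O    # number of element types followed by chemical symbols (hypothetical system)
    cutoff      8 4      # radial and angular cutoffs in Angstrom (illustrative values)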

v3.7

1 year ago
  • Bugfix:
    • Fixed a bug in the PLUMED interface #395
    • Fixed some bugs in the prediction mode of nep.in #391 #366
  • Enhancements
    • Changed the default of basis_size from 8 8 to 12 12 #402
    • Documented the plumed interface #394
    • Sped up NEP training when using the full batch #388
    • Improved the virial_train.out and virial_test.out file format: they have 12 instead of 2 columns now. #399
    • Added a warning for too-small energy values in the training/test data sets #378
  • New features:
    • Added an active-learning scheme for NEP training #396
    • Added an option to speed up MD simulations with NEP using tabulated radial functions #392

v3.6

1 year ago
  • Bugfix:
    • Fixed a bug in the PLUMED interface when the number of atoms exceeds 1024 #339
    • Fixed a file-size mismatch bug #310
  • Enhancements
    • NEP models can now be trained for up to 103 elements #334
    • NEP training now converges much faster than before for many-element systems #361
  • New features:
    • Added the compute_msd keyword for mean-square displacement calculations #324
    • Added the compute_viscosity keyword for viscosity calculations #325
    • Added the dump_observer keyword for running MD with the average of an ensemble of NEP models or for reporting model uncertainty #332
    • Added the lambda_shear keyword in nep.in (see the sketch after this list) #320
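
As a sketch, the new keyword is a single line in nep.in; the weight value below is purely illustrative, and its interpretation as an extra weight on the shear virial components in the loss should be checked against the documentation:

    lambda_shear  2.0    # weight for the shear virial components (illustrative value)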

v3.5

1 year ago
  • Added
    • NEP4, which can be selected with the version keyword in nep.in (see the sketch after this list). It is now the default version for NEP training, as it performs much better than NEP2 and NEP3 for multi-component systems. #277
    • An option to use input stress data (converted internally to virial) for NEP training. #278
  • Fixed some bugs:
    • One related to variable time steps. #271
    • One related to partition direction choice for multi-GPU MD with NEP. #270
    • One related to the report of target virial values in NEP training. #283
    • One with diffusion coefficient calculations when the product of the number of atoms and the number of time origins exceeds INT_MAX. #276
    • One with NEP heat current calculation for "small boxes". #274
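
Selecting the NEP version is then a one-line setting in nep.in, for example (a minimal sketch):

    version  4    # select NEP4, the default since this release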

v3.4.1

1 year ago
  • Fixed a small bug in the multi-GPU version of NEP related to the partition direction. #270
  • Fixed a small bug related to variable time steps. #271
  • Fixed the heat current for small-box MD with NEP. #274
  • Moved NEP_CPU to a separate repo (https://github.com/brucefan1983/NEP_CPU), where a NEP-LAMMPS interface is also created.

v3.4

1 year ago
  • Removed
    • Removed a few empirical potentials (Vashishta, SW, REBO-LJ, and Buckingham-Coulomb) and removed the hybrid-potential scheme. Users who still need these features should use GPUMD-v3.3.1.
    • The potential keyword no longer takes a list of atom types (integers) after the potential filename.
    • Hybrid potentials are no longer supported; the potential keyword must appear exactly once in run.in.
    • Removed the neighbor keyword in run.in. The code now builds the neighbor list at each step using the potential cutoff, so the user no longer needs to estimate a neighbor list size. For NEP models, the neighbor list size is recorded in nep.txt; for empirical potentials, the code uses reasonable built-in sizes.
    • Removed the so-called "driver input file" for both the gpumd and nep executables.
  • Added
    • Added multi-GPU (single-node) support for both training and MD simulation with the NEP model. By default the code uses all available GPUs; the set of GPUs can be restricted by running, e.g., export CUDA_VISIBLE_DEVICES=0,1 (where 0 and 1 are the IDs of the GPUs to be used) on the command line or in a cluster job-submission script.