Metadynamics (MTD; also abbreviated as METAD or MetaD) is a computer simulation method in computational physics, chemistry and biology. It is used to estimate the free energy and other state functions of a system whose ergodicity is hindered by the form of its energy landscape. It was first suggested by Alessandro Laio and Michele Parrinello in 2002[1] and is usually applied within molecular dynamics simulations. MTD closely resembles a number of newer methods such as adaptively biased molecular dynamics,[2] adaptive reaction coordinate forces[3] and local elevation umbrella sampling.[4] More recently, both the original and the well-tempered metadynamics[5] were derived in the context of importance sampling and shown to be special cases of the adaptive biasing potential setting.[6] MTD is related to Wang–Landau sampling.[7]
Metadynamics has been informally described as "filling the free energy wells with computational sand".[15] The algorithm assumes that the system can be described by a few collective variables (CVs). During the simulation, the location of the system in the space spanned by the collective variables is calculated, and a positive Gaussian potential is added to the system's real energy landscape at that location. In this way the system is discouraged from returning to points it has already visited. As the simulation proceeds, more and more Gaussians accumulate, discouraging the system ever more strongly from revisiting earlier configurations, until it has explored the full energy landscape. At that point the modified free energy is constant as a function of the collective variables, which is why the collective variables begin to fluctuate heavily, and the underlying energy landscape can be recovered as the negative of the sum of all the Gaussians.
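A minimal illustration of this procedure, assuming a one-dimensional toy double-well landscape, overdamped Langevin dynamics, and arbitrarily chosen Gaussian parameters (all names and values below are illustrative, not part of any production implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    def potential_force(s):          # -dU/ds for a toy double-well U(s) = (s^2 - 1)^2
        return -4.0 * s * (s**2 - 1.0)

    # Metadynamics parameters (illustrative values only)
    height, width, stride = 0.1, 0.2, 100    # Gaussian height, width sigma, deposition stride
    centers = np.empty(0)                    # CV values where Gaussians have been deposited

    def bias(s):                     # V_bias(s): sum of all deposited Gaussians
        return np.sum(height * np.exp(-(s - centers)**2 / (2.0 * width**2)))

    def bias_force(s):               # -dV_bias/ds
        d = s - centers
        return np.sum(height * d / width**2 * np.exp(-d**2 / (2.0 * width**2)))

    # Overdamped Langevin dynamics on the progressively biased landscape
    s, dt, kT = -1.0, 1e-3, 0.1
    for step in range(200_000):
        if step % stride == 0:
            centers = np.append(centers, s)  # "fill the well with computational sand"
        noise = np.sqrt(2.0 * kT * dt) * rng.standard_normal()
        s += (potential_force(s) + bias_force(s)) * dt + noise

    # Recover the free energy as the negative of the accumulated bias
    grid = np.linspace(-2.0, 2.0, 201)
    free_energy = -np.array([bias(x) for x in grid])
    free_energy -= free_energy.min()         # remove the irrelevant additive constant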
The time interval between the addition of two Gaussian functions, as well as the Gaussian height and width, are tuned to balance accuracy against computational cost. By simply changing the size of the Gaussians, metadynamics can be tuned to yield a rough map of the energy landscape very quickly (large Gaussians) or a finer-grained description (smaller Gaussians).[1] Usually, well-tempered metadynamics[5] is used to adapt the Gaussian height during the simulation, while the Gaussian width can be adapted with adaptive Gaussian metadynamics.[16]
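In well-tempered metadynamics the height of each newly deposited Gaussian is scaled down according to the bias already accumulated at the current CV value. A minimal sketch of that deposition rule, reusing the illustrative kT and bias() from the sketch above and an assumed bias factor gamma:

    # Well-tempered deposition rule (sketch): Gaussians shrink where bias has already built up.
    gamma = 10.0                        # bias factor > 1; gamma -> infinity recovers standard MTD
    delta_T = (gamma - 1.0) * kT        # k_B * Delta T in units where k_B = 1

    def tempered_height(s, w0=0.1):     # height of the next Gaussian deposited at CV value s
        return w0 * np.exp(-bias(s) / delta_T)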
Metadynamics has the advantage over methods such as adaptive umbrella sampling of not requiring an initial estimate of the energy landscape to be explored.[1] However, choosing proper collective variables for a complex simulation is not trivial. It typically takes several trials to find a good set of collective variables, although several automatic procedures have been proposed: essential coordinates,[17] Sketch-Map,[18] and non-linear data-driven collective variables.[19]
Multi-replica approach
Independent metadynamics simulations (replicas) can be coupled together to improve usability and parallel performance. Several such methods have been proposed: multiple-walker MTD,[20] parallel-tempering MTD,[21] bias-exchange MTD,[22] and collective-variable-tempering MTD.[23] The last three are similar to the parallel tempering method and use replica exchanges to improve sampling. Typically, the Metropolis–Hastings algorithm is used for replica exchanges, but the infinite swapping[24] and Suwa–Todo[25] algorithms give better replica exchange rates.[26]
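As an illustration of the exchange step, the following sketch shows a Metropolis acceptance test for swapping the configurations of two replicas run at the same temperature but carrying different bias potentials, as in bias-exchange MTD (function and variable names are illustrative):

    import numpy as np

    def accept_swap(bias_a, bias_b, s_a, s_b, kT, rng):
        # bias_a, bias_b: callables returning each replica's bias at a given CV value
        # s_a, s_b: current CV values of the configurations held by replicas a and b
        delta = (bias_a(s_a) + bias_b(s_b)) - (bias_a(s_b) + bias_b(s_a))
        return rng.random() < min(1.0, np.exp(delta / kT))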
High-dimensional approach
Typical (single-replica) MTD simulations can include up to 3 CVs, and even with the multi-replica approach it is hard to exceed 8 CVs in practice. This limitation comes from the bias potential, which is constructed by adding Gaussian functions (kernels) and is therefore a special case of a kernel density estimator (KDE). The number of kernels required for a constant KDE accuracy increases exponentially with the number of dimensions, so the MTD simulation length has to increase exponentially with the number of CVs to maintain the same accuracy of the bias potential. In addition, for fast evaluation the bias potential is typically approximated on a regular grid,[27] and the memory required to store the grid also increases exponentially with the number of dimensions (CVs).
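The memory cost of such a grid can be made concrete with a back-of-the-envelope estimate (assuming, for illustration, 100 grid points per CV and 8 bytes per stored value):

    # Memory needed for a regular bias-potential grid as the number of CVs grows
    points_per_cv, bytes_per_value = 100, 8
    for n_cv in (1, 2, 3, 6, 8):
        grid_bytes = bytes_per_value * points_per_cv**n_cv
        print(f"{n_cv} CVs: {grid_bytes / 1e9:.3g} GB")
    # 3 CVs need about 0.008 GB, 6 CVs about 8,000 GB, and 8 CVs about 8e7 GB.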
A high-dimensional generalization of metadynamics is NN2B.[28] It is based on two machine learning algorithms: the nearest-neighbor density estimator (NNDE) and the artificial neural network (ANN). NNDE replaces KDE to estimate the updates of the bias potential from short biased simulations, while the ANN is used to approximate the resulting bias potential. An ANN is a memory-efficient representation of high-dimensional functions, whose derivatives (the biasing forces) are efficiently computed with the backpropagation algorithm.[28][29]
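A minimal sketch of the second ingredient, representing a bias potential over several CVs with a small feed-forward network and obtaining the biasing forces by backpropagation (written here with PyTorch; the architecture and sizes are assumptions for illustration, not those of the published method):

    import torch

    n_cv = 6
    bias_net = torch.nn.Sequential(        # ANN approximating V_bias over n_cv collective variables
        torch.nn.Linear(n_cv, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )

    def bias_and_forces(cv_values):
        """cv_values: tensor of shape (n_cv,) with the instantaneous CV values."""
        s = cv_values.clone().requires_grad_(True)
        v_bias = bias_net(s).squeeze()
        (grad,) = torch.autograd.grad(v_bias, s)   # dV_bias/ds via backpropagation
        return v_bias.detach(), -grad.detach()     # biasing forces on the CVs are -dV_bias/ds

    v, f = bias_and_forces(torch.zeros(n_cv))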
An alternative method, which also exploits an ANN for the adaptive bias potential, estimates it from mean potential forces.[30] This method is likewise a high-dimensional generalization of the adaptive biasing force (ABF) method.[31] Additionally, the training of the ANN is improved using Bayesian regularization,[32] and the error of the approximation can be inferred by training an ensemble of ANNs.[30]
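The underlying ABF idea can be sketched in one dimension: accumulate samples of the instantaneous force acting along the CV in bins, average them, and integrate the mean force to obtain the free energy (arrays and names below are illustrative; the published method replaces the binning with an ANN):

    import numpy as np

    edges = np.linspace(-2.0, 2.0, 41)         # bins along the collective variable
    force_sum = np.zeros(len(edges) - 1)
    counts = np.zeros(len(edges) - 1)

    def accumulate(cv_value, instantaneous_force):
        i = np.searchsorted(edges, cv_value) - 1
        if 0 <= i < len(force_sum):
            force_sum[i] += instantaneous_force
            counts[i] += 1

    def free_energy_profile():
        mean_force = force_sum / np.maximum(counts, 1)   # <F>(s) per bin
        bin_width = edges[1] - edges[0]
        return -np.cumsum(mean_force) * bin_width        # F(s) = -integral of <F>(s) ds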
Developments since 2015
In 2015, White, Dama, and Voth introduced experiment-directed metadynamics, a method that allows molecular dynamics simulations to be shaped so as to match a desired free energy surface. The technique guides the simulation towards conformations that align with experimental data, enhancing the understanding of complex molecular systems and their behavior.[33]
In 2020, an evolution of metadynamics was proposed, the on-the-fly probability enhanced sampling method (OPES),[34][35][36] which is now the method of choice of Michele Parrinello's research group.[37] The OPES method has only a few robust parameters, converges faster than metadynamics, and has a straightforward reweighting scheme.[38] In 2024, a replica-exchange variant of OPES named OneOPES was developed,[39] designed to exploit a thermal gradient and multiple CVs in order to sample large biochemical systems whose many degrees of freedom are often difficult to capture with only a few CVs. OPES has been implemented in the PLUMED library since version 2.7.[40]
Algorithm
Assume we have a classical $N$-particle system with positions $\{\vec{r}_i\}$ in Cartesian coordinates $\vec{r}_i \in \mathbb{R}^3$. The particle interactions are described by a potential function $V(\{\vec{r}_i\})$. The form of the potential function (e.g. two local minima separated by a high-energy barrier) prevents ergodic sampling with molecular dynamics or Monte Carlo methods.
Original metadynamics
A general idea of MTD is to enhance the system sampling by discouraging revisiting of sampled states. It is achieved by augmenting the system Hamiltonian $H$ with a bias potential $V_\text{bias}(\vec{s})$:

$\tilde{H} = H + V_\text{bias}(\vec{s}).$
The bias potential is a function of the collective variables, $V_\text{bias}(\vec{s})$. A collective variable is a function of the particle positions, $\vec{s}(\{\vec{r}_i\})$. The bias potential is continuously updated by adding bias at rate $\omega$, where $\vec{s}_t$ is the instantaneous collective variable value at time $t$:

$\frac{\partial V_\text{bias}(\vec{s})}{\partial t} = \omega\,\delta\!\left(\vec{s} - \vec{s}_t\right).$
At infinitely long simulation time $t_\text{sim}$, the accumulated bias potential converges to the free energy with opposite sign (up to an irrelevant constant $C$):

$\lim_{t_\text{sim} \to \infty} V_\text{bias}(\vec{s}) = \lim_{t_\text{sim} \to \infty} \int_0^{t_\text{sim}} \omega\,\delta\!\left(\vec{s} - \vec{s}_t\right)\, dt = -F(\vec{s}) + C.$
For a computationally efficient implementation, the update process is discretised into time intervals of length $\tau$ (below, $\lfloor \cdot \rfloor$ denotes the floor function) and the $\delta$-function is replaced with a localized positive kernel function $K$, typically a Gaussian of width $\sigma$. The bias potential becomes a sum of the kernel functions centred at the instantaneous collective variable values $\vec{s}_{j\tau}$ at times $j\tau$:

$V_\text{bias}(\vec{s}) \approx \omega\tau \sum_{j=0}^{\lfloor t/\tau \rfloor} K\!\left(\vec{s} - \vec{s}_{j\tau}\right), \qquad K\!\left(\vec{s} - \vec{s}_{j\tau}\right) = \exp\!\left(-\frac{\left|\vec{s} - \vec{s}_{j\tau}\right|^2}{2\sigma^2}\right).$
The parameters $\tau$, $\sigma$, and $\omega$ are determined a priori and kept constant during the simulation.
Implementation
Below is pseudocode of MTD based on molecular dynamics (MD), where $\vec{r}$ and $\vec{v}$ are the $N$-particle system positions and velocities, respectively. The bias $V_\text{bias}$ is updated every $n$ MD steps, and its contribution to the system forces is $\vec{F}_\text{bias}$.
set initial $\vec{r}$ and $\vec{v}$, and set $V_\text{bias}(\vec{s}) = 0$
every MD step:
    compute CV values: $\vec{s} = \vec{s}(\vec{r})$
    every $n$ MD steps:
        update bias potential: $V_\text{bias}(\vec{s}) \leftarrow V_\text{bias}(\vec{s}) + \omega\tau \exp\!\left(-\tfrac{|\vec{s} - \vec{s}(\vec{r})|^2}{2\sigma^2}\right)$
    compute atomic forces: $\vec{F} = -\partial\!\left[V(\vec{r}) + V_\text{bias}(\vec{s})\right]/\partial\vec{r}$
    propagate $\vec{r}$ and $\vec{v}$ by $\Delta t$
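The only step above that is not a standard part of an MD loop is the bias contribution to the atomic forces, which follows from the chain rule through the collective variable. A minimal Python sketch, assuming for illustration a single CV defined as the distance between two atoms and a Gaussian kernel (all names and parameter values are illustrative):

    import numpy as np

    height, width = 1.2, 0.05            # Gaussian height and width (illustrative values)
    centers = []                         # CV values s_{j*tau} where Gaussians were deposited

    def cv_and_gradient(r, i, j):        # CV: distance between atoms i and j
        d = r[i] - r[j]
        s = np.linalg.norm(d)
        return s, d / s                  # s and ds/dr_i (ds/dr_j = -ds/dr_i)

    def bias_atomic_forces(r, i, j):
        s, ds_dri = cv_and_gradient(r, i, j)
        c = np.asarray(centers, dtype=float)
        dvbias_ds = np.sum(-height * (s - c) / width**2 *
                           np.exp(-(s - c)**2 / (2.0 * width**2)))
        forces = np.zeros_like(r)
        forces[i] = -dvbias_ds * ds_dri  # F_bias,i = -dV_bias/ds * ds/dr_i
        forces[j] = +dvbias_ds * ds_dri
        return forces                    # added to the unbiased atomic forces each MD step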
Free energy estimator
The finite size of the kernel makes the bias potential fluctuate around a mean value. A converged free energy can be obtained by averaging the bias potential. The averaging is started from $t_\text{diff}$, the time when the motion along the collective variable becomes diffusive:

$\bar{F}(\vec{s}) = -\frac{1}{t_\text{sim} - t_\text{diff}} \int_{t_\text{diff}}^{t_\text{sim}} V_\text{bias}(\vec{s}, t)\, dt.$
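A minimal sketch of this estimator, assuming the bias has been stored on a CV grid at regular intervals (array names are illustrative):

    import numpy as np

    def averaged_free_energy(bias_snapshots, times, t_diff):
        # bias_snapshots: array of shape (n_times, n_grid) holding V_bias(s, t) on a CV grid
        # times: array of shape (n_times,); t_diff: start of the diffusive regime
        mask = times >= t_diff
        f = -bias_snapshots[mask].mean(axis=0)   # F(s) = -<V_bias(s, t)> averaged over [t_diff, t_sim]
        return f - f.min()                       # drop the irrelevant additive constant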
References
Hansen, H.S.; Hünenberger, P.H. (2010). "Using the local elevation method to construct optimized umbrella sampling potentials: Calculation of the relative free energies and interconversion barriers of glucopyranose ring conformers in water". J. Comput. Chem. 31 (1): 1–23. doi:10.1002/jcc.21253. PMID 19412904. S2CID 7367058.
Junghans, Christoph; Perez, Danny; Vogel, Thomas (2014). "Molecular Dynamics in the Multicanonical Ensemble: Equivalence of Wang–Landau Sampling, Statistical Temperature Molecular Dynamics, and Metadynamics". Journal of Chemical Theory and Computation. 10 (5): 1843–1847. doi:10.1021/ct500077d.
Levy, A.V.; Montalvo, A. (1985). "The Tunneling Algorithm for the Global Minimization of Functions". SIAM J. Sci. Stat. Comput. 6: 15–29. doi:10.1137/0906002.
Branduardi, Davide; Bussi, Giovanni; Parrinello, Michele (2012-06-04). "Metadynamics with Adaptive Gaussians". Journal of Chemical Theory and Computation. 8 (7): 2247–2254. arXiv:1205.4300. doi:10.1021/ct3002464. PMID 26588957. S2CID 20002793.
Spiwok, V.; Lipovová, P.; Králová, B. (2007). "Metadynamics in essential coordinates: free energy simulation of conformational changes". The Journal of Physical Chemistry B. 111 (12): 3073–3076. doi:10.1021/jp068587c. PMID 17388445.
Raiteri, Paolo; Laio, Alessandro; Gervasio, Francesco Luigi; Micheletti, Cristian; Parrinello, Michele (2005-10-28). "Efficient Reconstruction of Complex Free Energy Landscapes by Multiple Walkers Metadynamics". The Journal of Physical Chemistry B. 110 (8): 3533–3539. doi:10.1021/jp054359r. PMID 16494409. S2CID 15595613.
Bussi, Giovanni; Gervasio, Francesco Luigi; Laio, Alessandro; Parrinello, Michele (October 2006). "Free-Energy Landscape for β Hairpin Folding from Combined Parallel Tempering and Metadynamics". Journal of the American Chemical Society. 128 (41): 13435–13441. doi:10.1021/ja062463w. PMID 17031956.
Galvelis, Raimondas; Sugita, Yuji (2015-07-15). "Replica state exchange metadynamics for improving the convergence of free energy estimates". Journal of Computational Chemistry. 36 (19): 1446–1455. doi:10.1002/jcc.23945. ISSN 1096-987X. PMID 25990969. S2CID 19101602.
Galvelis, Raimondas; Sugita, Yuji (2017-06-13). "Neural Network and Nearest Neighbor Algorithms for Enhancing Sampling of Molecular Dynamics". Journal of Chemical Theory and Computation. 13 (6): 2489–2500. doi:10.1021/acs.jctc.7b00188. ISSN 1549-9618. PMID 28437616.
Zhang, Linfeng; Wang, Han; E, Weinan (2017-12-09). "Reinforced dynamics for enhanced sampling in large atomic and molecular systems. I. Basic Methodology". The Journal of Chemical Physics. 148 (12): 124113. arXiv:1712.03461. doi:10.1063/1.5019675. PMID 29604808. S2CID 4552400.
White, Andrew D.; Dama, James F.; Voth, Gregory A. (2015). "Designing Free Energy Surfaces That Match Experimental Data with Metadynamics". Journal of Chemical Theory and Computation. 11 (6): 2451–2460. doi:10.1021/acs.jctc.5b00178. OSTI 1329576. PMID 26575545.
Ensing, B.; De Vivo, M.; Liu, Z.; Moore, P.; Klein, M. (2006). "Metadynamics as a tool for exploring free energy landscapes of chemical reactions". Accounts of Chemical Research. 39 (2): 73–81. doi:10.1021/ar040198i. PMID 16489726.
Gervasio, F.; Laio, A.; Parrinello, M. (2005). "Flexible docking in solution using metadynamics". Journal of the American Chemical Society. 127 (8): 2600–2607. doi:10.1021/ja0445950. PMID 15725015. S2CID 6304388.