Classical Monte Carlo Simulations for the 2D Ising Model and Wolff Algorithm
1. Classical Monte Carlo Simulations for the 2D Ising Model
Model
The Hamiltonian of the Ising model is written as
$$H = -J \sum_{\langle i,j \rangle} s_i s_j,$$
where $\langle i,j \rangle$ runs over all nearest-neighbor pairs on the lattice and each site $i$ carries a spin variable $s_i$ that can take two values: $+1$ (representing an up spin) or $-1$ (representing a down spin). The behavior of the model is determined by the competition between the interaction energy $J$ and thermal fluctuations. At low temperatures, the system exhibits ferromagnetic order, where all spins align in the same direction. At high temperatures, the system is disordered, with spins randomly oriented. Therefore, a finite-temperature phase transition is expected in this model. Despite its simplicity, this classical model provides profound insight into the behavior of magnets through its concise mathematical structure and important physical properties.
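As a concrete illustration, the energy of a given spin configuration on a periodic square lattice can be evaluated directly from the Hamiltonian. This is a minimal Python sketch (the report's own program, given later, is in Fortran 90); the function name and lattice size are ours:

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Evaluate H = -J * sum over nearest-neighbor pairs <i,j> of s_i*s_j
    on a periodic square lattice, counting each bond exactly once."""
    # np.roll pairs every site with its right and down neighbor,
    # which enumerates each nearest-neighbor bond exactly once.
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * (right + down))

# A fully aligned 4x4 lattice has 2*4*4 = 32 bonds, each contributing -J.
aligned = np.ones((4, 4), dtype=int)
print(ising_energy(aligned))  # -32.0
```

A checkerboard (fully anti-aligned) configuration gives the opposite extreme, $+32J$, since every bond is frustrated.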
Markov chain Monte Carlo simulation
Given a discrete probability distribution $P(x)$, the expectation value of an observable $O$ is given by
$$\langle O \rangle = \sum_{x} O(x)\, P(x).$$
However, evaluating this sum directly may be expensive or even unfeasible when the range of $x$ is huge.
One alternative way to address this is sampling, which gives an approximate estimate of $\langle O \rangle$. To be specific, if we can generate different samples (configurations) of $x$ according to their corresponding probabilities, and sample $N$ times, then the weak law of large numbers guarantees that
$$\langle O \rangle \approx \frac{1}{N} \sum_{n=1}^{N} O(x_n),$$
where $x_n$ is our $n$th sample.
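To make the idea concrete, here is a minimal Python sketch using a toy two-valued distribution (the distribution and names are our illustrative assumptions, not from the text): $P(+1) = 0.7$, $P(-1) = 0.3$, whose exact expectation is $0.4$.

```python
import random

random.seed(0)  # reproducible pseudo-random number stream

def draw_sample():
    """Sample x in {+1, -1} according to P(+1) = 0.7, P(-1) = 0.3."""
    return 1 if random.random() < 0.7 else -1

N = 100_000
estimate = sum(draw_sample() for _ in range(N)) / N
# The exact value is 0.7*(+1) + 0.3*(-1) = 0.4; the sample mean
# converges to it as N grows, per the weak law of large numbers.
print(estimate)
```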
Monte Carlo (MC) simulation basically generates such samples on a computer with a pseudo-random number generator and evaluates the observable on them. One popular way to generate the samples is to use a Markov chain, and this kind of MC is also called Markov chain Monte Carlo (MCMC).
Mathematically, a Markov chain means that the $(n+1)$th sample depends only on the $n$th sample and some random input used for generating samples, i.e.
$$x_{n+1} = f(x_n; \xi_n),$$
where $\xi_n$ denotes the random numbers. The master equation for this stochastic process is thus
$$P_{n+1}(x) = P_n(x) + \sum_{x'} \left[ P_n(x')\, W(x|x') - P_n(x)\, W(x'|x) \right],$$
where $P_n(x)$ denotes the probability that we obtain $x$ as the $n$th sample and $W(x'|x)$ is the conditional probability to transfer from $x$ to $x'$. If we expect an equilibrium distribution on the Markov chain, i.e. $P_{n+1}(x) = P_n(x) \equiv P(x)$, then one strong (sufficient, though not necessary) restriction is to require
$$P(x)\, W(x'|x) = P(x')\, W(x|x').$$
This constraint is called the detailed balance condition. If it can be satisfied, the equilibrium distribution must exist.
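As a numerical sanity check, detailed balance can be verified explicitly for a toy two-state system with a Metropolis-type transition probability (the two-state setup is our illustrative assumption, not part of the text):

```python
# Toy two-state system with target distribution P = (0.3, 0.7).
# Trial move: always propose the other state (a symmetric trial),
# accepted with probability min(1, P(x')/P(x)).
P = [0.3, 0.7]
T = [[0.0, 0.0], [0.0, 0.0]]
T[0][1] = min(1.0, P[1] / P[0])  # move 0 -> 1 (always accepted)
T[1][0] = min(1.0, P[0] / P[1])  # move 1 -> 0 (accepted with prob 3/7)
T[0][0] = 1.0 - T[0][1]          # probability of staying put
T[1][1] = 1.0 - T[1][0]

# Detailed balance: probability flow 0 -> 1 equals flow 1 -> 0.
flow_01 = P[0] * T[0][1]
flow_10 = P[1] * T[1][0]
print(flow_01, flow_10)  # both equal 0.3 (up to rounding)
```

Because the flows balance, $P$ is also stationary under this transition matrix, which is exactly what the detailed balance condition guarantees.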
There are many ways to satisfy it, and we consider the Metropolis-Hastings algorithm here. First we rewrite
$$W(x'|x) = T(x'|x)\, A(x'|x),$$
where $T(x'|x)$ is a trial (proposal) probability that we are free to define, and $A(x'|x)$ is called the acceptance probability, defined as
$$A(x'|x) = \min\left(1,\ \frac{P(x')\, T(x|x')}{P(x)\, T(x'|x)}\right).$$
With this choice of $A(x'|x)$, one can check that the detailed balance condition is naturally satisfied. It then remains to determine the functional form of $T(x'|x)$. As we will see later, in our simulation of the Ising model we can make $T(x'|x) = T(x|x')$ throughout the simulations, so the acceptance probability reduces to
$$A(x'|x) = \min\left(1,\ \frac{P(x')}{P(x)}\right).$$
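In code, the symmetric-trial acceptance rule can be sketched as follows (a Python illustration; the function name and log-probability interface are ours):

```python
import math
import random

def metropolis_accept(delta_logP, rng=random):
    """Accept the move x -> x' with probability min(1, P(x')/P(x)),
    assuming a symmetric trial probability T(x'|x) = T(x|x').
    delta_logP = log P(x') - log P(x)."""
    if delta_logP >= 0:
        return True          # uphill in probability: always accept
    return rng.random() < math.exp(delta_logP)

random.seed(1)
# Over many trials, moves with P(x')/P(x) = e^{-1} should be accepted
# with frequency close to e^{-1} ~ 0.368.
accepted = sum(metropolis_accept(-1.0) for _ in range(100_000)) / 100_000
print(accepted)
```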
Another important requirement for an MCMC scheme is ergodicity: we have to make sure that any possible configuration $x$ can be reached by the Markov chain in a finite (discrete) number of steps. However, this may not be easy to prove in some cases.
MCMC on the Ising model
With MCMC, we can now calculate the expectation values of observables, such as the energy of the Ising model, by sampling different configurations (classical states). At equilibrium, the probability distribution of the configurations obeys the Boltzmann distribution
$$P(x) = \frac{e^{-E(x)/k_B T}}{Z}, \qquad Z = \sum_{x} e^{-E(x)/k_B T},$$
where $k_B$ is the Boltzmann constant, $T$ is the temperature, and $E(x)$ is the energy corresponding to configuration $x$. Suppose we have a configuration $x_n$ at step $n$; to generate a new configuration $x'$, we randomly choose one spin and flip it. Therefore $T(x'|x_n) = T(x_n|x')$ is satisfied, and for $x_n$ and $x'$ the acceptance probability reduces to
$$A(x'|x_n) = \min\left(1,\ e^{-\beta \Delta E}\right),$$
where $\beta = 1/k_B T$ and $\Delta E = E(x') - E(x_n)$. This means that if the new configuration has a lower energy ($\Delta E \leq 0$), we accept it with 100% probability and assign $x_{n+1} = x'$; otherwise, we accept it with probability $e^{-\beta \Delta E}$. Since any configuration can be turned into any other by a sequence of single flips, the single-flip strategy enables us to reach every possible configuration, and ergodicity is satisfied.
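Because a single flip changes only the four bonds touching the chosen site, the energy difference can be computed locally. A Python sketch with periodic boundary conditions (the function name is ours):

```python
import numpy as np

def flip_delta_E(spins, i, j, J=1.0):
    """Energy change Delta E = E(x') - E(x) when spin (i, j) is flipped
    on a periodic square lattice: Delta E = 2*J*s_ij*(sum of 4 neighbors)."""
    L = spins.shape[0]
    nn_sum = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    return 2.0 * J * spins[i, j] * nn_sum

# In a fully aligned lattice, flipping any spin breaks 4 bonds: Delta E = +8J.
print(flip_delta_E(np.ones((4, 4), dtype=int), 0, 0))  # 8.0
```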
Practically, we have to assign an initial configuration, which could be the fully polarized state (all spins up) or a random one. At low temperature, a random initial configuration typically corresponds to a highly excited state, which has a very small statistical weight. Consequently, if we take measurements from the very beginning, we introduce errors, since the early states are out of equilibrium. Therefore, we first perform a number of thermalization steps to equilibrate the system, and only then carry out the measurements.
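The thermalization-then-measurement structure can be sketched as a short Python driver (the report's program is in Fortran 90; the lattice size, temperature, and step counts below are illustrative choices, not taken from the text):

```python
import numpy as np

def sweep(spins, beta, rng):
    """One Monte Carlo sweep: L*L single-spin Metropolis updates."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn            # energy change of the flip
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]         # accept the flip

rng = np.random.default_rng(0)
L, beta = 8, 0.7                    # beta above beta_c ~ 0.4407: ordered phase
spins = np.ones((L, L), dtype=int)  # all-up initial configuration
for _ in range(200):                # thermalization: no measurements taken
    sweep(spins, beta, rng)
mags = []
for _ in range(200):                # measurement phase
    sweep(spins, beta, rng)
    mags.append(abs(spins.mean()))
print(np.mean(mags))                # close to 1 deep in the ordered phase
```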
Observables
To investigate the properties of the Ising model, several observables are measured. In the case of the ferromagnetic Ising model, the most important quantity to calculate is the magnetization. It should be computed using all the spins to take advantage of self-averaging and improve the statistics:
$$m = \frac{1}{N} \sum_{i} s_i,$$
where $N$ is the total number of spins. At low temperature, the two fully polarized configurations (all spins up and all spins down) lead to $m = +1$ and $m = -1$, respectively. Therefore we take $\langle |m| \rangle$ to detect the phase transition.
Another observable is the magnetic susceptibility, which describes the response of a material to an external magnetic field and measures the degree of magnetization induced by the field. The magnetic susceptibility is usually denoted by the symbol $\chi$ and, on a finite lattice, can be estimated from the magnetization fluctuations as
$$\chi = \frac{N}{k_B T} \left( \langle m^2 \rangle - \langle |m| \rangle^2 \right).$$
Magnetic susceptibility plays an important role in material research and application. By measuring the magnetic susceptibility, we can understand the magnetic properties, phase transition behavior and magnetic permeability of the material. Magnetic susceptibility is also closely related to physical phenomena such as magnetic interaction, spin structure and magnetic field effect of materials.
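The fluctuation estimator above translates directly into code; a minimal Python sketch (the finite-lattice convention with $\langle |m| \rangle$ in the subtracted term is a common choice, stated here as an assumption rather than taken from the text):

```python
import numpy as np

def susceptibility(mags, n_spins, beta):
    """Fluctuation estimator chi = beta * N * (<m^2> - <|m|>^2),
    where mags holds per-spin magnetization samples m = (1/N) sum_i s_i.
    Using <|m|> avoids the spurious cancellation of +m and -m samples
    on a finite lattice."""
    m = np.asarray(mags, dtype=float)
    return beta * n_spins * (np.mean(m**2) - np.mean(np.abs(m))**2)

# Perfectly ordered samples have no fluctuations, so chi vanishes.
print(susceptibility([1.0, 1.0, 1.0], n_spins=16, beta=1.0))  # 0.0
```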
The Binder ratio is a dimensionless quantity used to characterize scale invariance at a phase transition. Near the critical point, it can be used to compare the statistics of the magnetization across systems of different sizes. It can be defined as
$$R = \frac{\langle m^2 \rangle^2}{\langle m^4 \rangle},$$
where $\langle m^2 \rangle$ is the second moment of the magnetization and $\langle m^4 \rangle$ is the fourth moment. At $T = T_c$, the power laws cancel out, and the ratio is $L$-independent (and also universal), up to sub-leading finite-size corrections.
The Binder ratio can be used to determine whether the system undergoes a phase transition near the transition point. When the system is in an ordered phase, the Binder ratio tends to 1; when it is in a disordered phase, the ratio tends to a smaller constant ($1/3$ for a Gaussian-distributed magnetization). By calculating the Binder ratio for systems of different sizes and observing whether the curves converge (cross), it is possible to determine whether the system has a phase transition.
The Binder cumulant is a statistical quantity used to study phase transitions, often applied to isotropic magnetic systems and other transitions. It was proposed in order to determine whether a phase transition occurs in a system and to estimate the position of the transition point in numerical simulations. In the case of a scalar order parameter (e.g., for the Ising model), the Binder cumulant is defined as
$$U = 1 - \frac{\langle m^4 \rangle}{3 \langle m^2 \rangle^2}.$$
The main characteristic of the Binder cumulant is that it tends to a size-independent constant near the phase transition point and to different values away from it. By observing the change of the Binder cumulant with temperature or other external parameters, the phase transition point can be determined accurately.
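Both dimensionless quantities are simple moment ratios of the sampled magnetization. A Python sketch (the normalization $U = 1 - \langle m^4 \rangle / 3\langle m^2 \rangle^2$, which tends to $2/3$ in the ordered phase and $0$ for Gaussian fluctuations, is the standard scalar-order-parameter form and is our assumption here):

```python
import numpy as np

def binder_ratio(mags):
    """R = <m^2>^2 / <m^4>: tends to 1 in the ordered phase."""
    m = np.asarray(mags, dtype=float)
    return np.mean(m**2)**2 / np.mean(m**4)

def binder_cumulant(mags):
    """U = 1 - <m^4> / (3 <m^2>^2): 2/3 when ordered, 0 for a
    Gaussian-distributed (disordered) order parameter."""
    m = np.asarray(mags, dtype=float)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2)**2)

# Fully ordered samples m = +/-1 give R = 1 and U = 2/3.
print(binder_ratio([1.0, -1.0, 1.0, -1.0]))     # 1.0
print(binder_cumulant([1.0, -1.0, 1.0, -1.0]))  # 0.666...
```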
2. Program
The following is the program for the classical Monte Carlo simulation of the 2D Ising model in Fortran 90.
(The Fortran 90 source listing is omitted here.)
3. Wolff Algorithm
The Metropolis algorithm with single-spin flips suffers from critical slowing down near the critical point. In the critical region, the system forms a large number of clusters (magnetic domains), within which the single-flip acceptance probability is very small; effective flips occur only when a spin on a domain boundary is picked. As a result, successive sampled configurations are strongly correlated and the autocorrelation time becomes very large.
Therefore we use a classic cluster algorithm, the Wolff algorithm, to overcome these problems. It works in the following way.
- Choose a random spin as the initial cluster.
- If a neighboring spin is parallel to the spins in the cluster, it is added to the cluster with probability $P_{\rm add} = 1 - e^{-2\beta J}$.
- Repeat this test for all sites newly added to the cluster, and continue until no new sites can be added.
- Perform measurements using improved estimators.
- Flip all spins in the cluster.
In the program, the only difference between the Wolff algorithm and the single-spin-flip algorithm is the subroutine update, which is modified as follows.
(The Fortran 90 listing of the modified subroutine update is omitted here.)
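The cluster growth described above can be sketched in Python as follows (the report's subroutine is in Fortran 90; the names and the breadth-first bookkeeping are ours). Parallel neighbors join the cluster with probability $P_{\rm add} = 1 - e^{-2\beta J}$, and the whole cluster is flipped at the end:

```python
from collections import deque

import numpy as np

def wolff_update(spins, beta, J=1.0, rng=None):
    """Grow and flip one Wolff cluster on a periodic square lattice.
    Returns the cluster size."""
    if rng is None:
        rng = np.random.default_rng()
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)   # bond-activation probability
    i0, j0 = rng.integers(L), rng.integers(L)
    seed = spins[i0, j0]                    # spin value of the seed site
    cluster = {(i0, j0)}
    frontier = deque([(i0, j0)])            # sites whose neighbors are untested
    while frontier:
        i, j = frontier.popleft()
        for ni, nj in (((i + 1) % L, j), ((i - 1) % L, j),
                       (i, (j + 1) % L), (i, (j - 1) % L)):
            if ((ni, nj) not in cluster and spins[ni, nj] == seed
                    and rng.random() < p_add):
                cluster.add((ni, nj))
                frontier.append((ni, nj))
    for i, j in cluster:                    # flip the whole cluster at once
        spins[i, j] = -spins[i, j]
    return len(cluster)

rng = np.random.default_rng(0)
spins = np.ones((8, 8), dtype=int)
size = wolff_update(spins, beta=1.0, rng=rng)
print(size)  # cluster size, between 1 and 64
```

Because whole clusters are flipped in one move, consecutive configurations decorrelate much faster near the critical point than under single-spin flips.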