
This page describes the algorithm used in pyOZ for the solution of the Ornstein-Zernike equation. It covers only the "heart" of the program; input and output handling and similar details are skipped.

Before discussing the algorithm itself, some things have to be clarified first. We have the multicomponent Ornstein-Zernike equation

$$h_{ij}(r) = c_{ij}(r) + \sum_l \rho_l \int c_{il}(|\mathbf{r} - \mathbf{r}'|)\, h_{lj}(r')\, \mathrm{d}\mathbf{r}'$$

In Fourier space (using the convolution theorem)

$$\hat{h}_{ij}(k) = \hat{c}_{ij}(k) + \sum_l \rho_l\, \hat{c}_{il}(k)\, \hat{h}_{lj}(k)$$

All correlation functions are discretized on an evenly spaced grid in both real and Fourier space. This means we actually have a set of P (number of discretization points) matrix equations of dimensionality (ncomponents × ncomponents). All matrices are square and symmetric, since pair additivity is assumed for the interaction potentials.
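As an illustration of this layout (the array shapes and values here are assumptions for the example, not pyOZ internals), the discretized functions can be stored as three-dimensional arrays holding one symmetric matrix per grid point:

```python
import numpy as np

ncomp = 2     # number of components (illustrative value)
P = 1024      # number of discretization points (illustrative value)

# one (ncomp x ncomp) matrix per grid point, stored as a 3D array
c = np.zeros((ncomp, ncomp, P))        # direct correlation function
r = 0.05 * np.arange(1, P + 1)         # real-space grid

# pair additivity makes every matrix symmetric: c[i, j] == c[j, i]
c[0, 0] = c[1, 1] = -np.exp(-r)
c[0, 1] = c[1, 0] = -0.5 * np.exp(-r)
assert np.allclose(c, np.transpose(c, (1, 0, 2)))
```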

In order to get rid of the density, we multiply by $\sqrt{\rho_i\rho_j}$ to get

$$\sqrt{\rho_i\rho_j}\,\hat{h}_{ij} = \sqrt{\rho_i\rho_j}\,\hat{c}_{ij} + \sum_l \sqrt{\rho_i\rho_l}\,\hat{c}_{il}\,\sqrt{\rho_l\rho_j}\,\hat{h}_{lj}$$

and recognize that the sum is the definition of matrix multiplication. We then define new functions that have the density prefactor included, e.g.,

$$\hat{H}_{ij} = \sqrt{\rho_i\rho_j}\,\hat{h}_{ij}, \qquad \hat{C}_{ij} = \sqrt{\rho_i\rho_j}\,\hat{c}_{ij}$$

and write everything in matrix form

$$\hat{\mathbf{H}} = \hat{\mathbf{C}} + \hat{\mathbf{C}}\hat{\mathbf{H}} \quad\Longrightarrow\quad \hat{\mathbf{H}} = (\mathbf{E} - \hat{\mathbf{C}})^{-1}\hat{\mathbf{C}}$$

Solving this matrix equation for all discretization steps and subtracting the direct correlation function in Fourier space yields the indirect correlation function in Fourier space as well:

$$\hat{\mathbf{\Gamma}} = (\mathbf{E} - \hat{\mathbf{C}})^{-1}\hat{\mathbf{C}} - \hat{\mathbf{C}}$$

**E** in the previous equations is the identity matrix. Division by the density factor yields $\hat{\gamma}_{ij} = \hat{\Gamma}_{ij}/\sqrt{\rho_i\rho_j}$. Its inverse Fourier transformation provides the Γ function in real space, where it is used for further calculation, as described below.
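The per-grid-point matrix solve can be sketched as follows; the densities, grid size, and input data are hypothetical, and only the linear algebra mirrors the equations above:

```python
import numpy as np

ncomp, N = 2, 64
rho = np.array([0.01, 0.02])          # hypothetical number densities
pref = np.sqrt(np.outer(rho, rho))    # density prefactor sqrt(rho_i * rho_j)

# some small, symmetric stand-in data for c in Fourier space
c_hat = np.random.default_rng(0).normal(scale=1e-3, size=(ncomp, ncomp, N))
c_hat = 0.5 * (c_hat + c_hat.transpose(1, 0, 2))

C = pref[:, :, None] * c_hat          # include density prefactor
E = np.eye(ncomp)

G = np.empty_like(C)                  # Gamma with prefactor, Fourier space
for k in range(N):                    # one small matrix problem per grid point
    G[:, :, k] = np.linalg.solve(E - C[:, :, k], C[:, :, k]) - C[:, :, k]

gamma_hat = G / pref[:, :, None]      # remove the density prefactor again
```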

The whole algorithm consists of repeating several steps until convergence is reached. This is the simplest version of the calculation procedure; pyOZ currently follows these steps exactly. This might change in the future; in such a case, the change will be noted here and the exact procedure will be discussed thoroughly in the Theory section. (The handling of the density prefactors is meanwhile done transparently by the Fourier transforms themselves, which are normalized by these constants.)

1. Set the starting Gamma function: use either zeros or the Gamma function from a previous calculation.
2. Calculate the direct correlation function **c** in real space using the selected closure relation and Γ.
3. Fourier transform the direct correlation function to Fourier space. We are using the Fourier sine transformation; the operation can be described as (DFT and other normalization factors omitted)

   $$\hat{c}_{ij}(k) = \frac{4\pi\,\Delta r}{k}\,\mathrm{DST}\left[r\,c_{ij}(r)\right]$$

   where DST is the discrete sine transform, k is the reciprocal distance and Δr is the step size in real space. Then multiply by $\sqrt{\rho_i\rho_j}$ and get the final $\hat{C}_{ij}$ in Fourier space.
4. Solve the matrix problem in Fourier space and get $\hat{\Gamma}$, the Gamma function in Fourier space with the density prefactor applied.
5. Divide by $\sqrt{\rho_i\rho_j}$ to get $\hat{\gamma}$, the Gamma function in Fourier space without the density prefactor applied.
6. Fourier transform the Gamma function to real space using the inverse discrete sine transform (DFT and other normalization factors omitted)

   $$\gamma_{ij}(r) = \frac{\Delta k}{2\pi^2\, r}\,\mathrm{iDST}\left[k\,\hat{\gamma}_{ij}(k)\right]$$

   where iDST is the inverse discrete sine transform, r is the distance, and Δk is the step size in Fourier space.
7. Check the convergence (DSQN) of Γ,

   $$\mathrm{DSQN} = \sqrt{\frac{1}{N\,\mathrm{ncomp}^2}\sum_{ij}\sum_{k=1}^{N}\left[\Gamma_{ij}^\mathrm{new}(r_k) - \Gamma_{ij}^\mathrm{old}(r_k)\right]^2}$$

   DSQN is normalized to the number of discretization points (N) and the number of different combinations of the interacting "ncomponents" particles; note that Γ has dimensions of (ncomp × ncomp × N).
8. If not converged, get a new Γ. Currently it is calculated by mixing the old (from step 2) and new values (from step 6) using a mixing parameter α, $\Gamma = \alpha\,\Gamma_\mathrm{new} + (1-\alpha)\,\Gamma_\mathrm{old}$; more sophisticated approaches exist. Return to step 2 and repeat the cycle.
9. If convergence is achieved, finish the calculation.
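For a single component the whole cycle collapses to scalar equations and can be sketched as below. The grid, density, potential, and mixing parameter are illustrative choices, not pyOZ defaults; the transforms implement the sine-transform pair described above via an explicit sine matrix:

```python
import numpy as np

# minimal one-component Picard cycle (HNC closure); all parameter values
# here are arbitrary illustrative assumptions
N, dr = 512, 0.05
r = dr * np.arange(1, N + 1)
dk = np.pi / (dr * (N + 1))           # step chosen so the DST pair is exact
k = dk * np.arange(1, N + 1)
rho = 0.01                            # hypothetical number density
beta_u = 4.0 * np.exp(-r)             # hypothetical repulsive potential, beta*U

S = np.sin(np.outer(k, r))            # explicit DST-I matrix

def ft(f):    # c(r) -> c(k), prefactors as in the formulas above
    return 4.0 * np.pi * dr / k * (S @ (r * f))

def ift(f):   # c(k) -> c(r)
    return dk / (2.0 * np.pi**2 * r) * (S @ (k * f))

gamma = np.zeros(N)
alpha = 0.2                           # mixing parameter
for it in range(1000):
    c = np.exp(-beta_u + gamma) - gamma - 1.0    # HNC closure
    c_k = ft(c)
    gamma_k = rho * c_k**2 / (1.0 - rho * c_k)   # one-component OZ solution
    gamma_new = ift(gamma_k)
    dsqn = np.sqrt(np.sum((gamma_new - gamma)**2) / N)
    gamma = alpha * gamma_new + (1.0 - alpha) * gamma
    if dsqn < 1e-9:
        break
```

For one component the matrix solve reduces to $\hat\gamma = \rho\hat c^2/(1-\rho\hat c)$, which is the line in the middle of the loop.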

Newton-Raphson/conjugate gradients (NR/CG) modification of the algorithm (*J. Comput. Phys.* **1985**, *61*, 280-285; *J. Chem. Phys.* **1988**, *88*, 5143)

The Picard iteration (mixing) is generally not very stable and requires many iterations to reach convergence. We are therefore looking for another way to obtain a better estimate of the final function.

One of these possibilities is the so-called Newton-Raphson (NR) procedure. NR can be used for finding the root of a function by linear extrapolation using the derivative at the trial point,

$$f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x$$

and finding the new value by setting $f(x + \Delta x) = 0$, which gives $\Delta x = -f(x)/f'(x)$ and $x_\mathrm{new} = x + \Delta x$. In the case of a linear function, this procedure leads directly to the solution; in the case of a nonlinear function, it has to be iterated.
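A minimal sketch of the one-dimensional procedure (the function and starting point are arbitrary examples):

```python
# one-dimensional Newton-Raphson: x_new = x - f(x)/f'(x), iterated
def newton(f, fprime, x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        dx = -f(x) / fprime(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

# find sqrt(2) as the root of f(x) = x**2 - 2, starting from x = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```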

The situation is more complicated in the case of a multidimensional problem. There we have a system of equations

$$f_i(x_1, x_2, \ldots, x_n) = 0, \qquad i = 1, \ldots, n$$

In matrix form

$$\mathbf{F}(\mathbf{X}) = \mathbf{0}, \qquad \mathbf{X} = (x_1, \ldots, x_n)^T$$

The NR method is then written as

$$f_i(\mathbf{X} + \Delta\mathbf{X}) = f_i(\mathbf{X}) + \sum_j \frac{\partial f_i}{\partial x_j}\,\Delta x_j + \mathcal{O}(\Delta\mathbf{X}^2)$$

Let's define the Jacobian matrix

$$J_{ij} = \frac{\partial f_i}{\partial x_j}$$

The whole method can then be written

$$\mathbf{F}(\mathbf{X} + \Delta\mathbf{X}) = \mathbf{F}(\mathbf{X}) + \mathbf{J}\,\Delta\mathbf{X} + \mathcal{O}(\Delta\mathbf{X}^2)$$

Neglecting the second-order terms and setting the function value at the new set of trial points to zero (as was done for the one-dimensional case) leads to

$$\mathbf{J}\,\Delta\mathbf{X} = -\mathbf{F}(\mathbf{X})$$

The new, improved set of trial points can then be obtained via

$$\mathbf{X}_\mathrm{new} = \mathbf{X} + \Delta\mathbf{X} = \mathbf{X} - \mathbf{J}^{-1}\,\mathbf{F}(\mathbf{X})$$

This step is quite problematic, since the Jacobian matrix is usually very large and cannot be inverted efficiently. A way to solve the equation (albeit approximately) without the need to invert **J** – the iterative nonsymmetric conjugate gradients method – will be shown later.

We now return to the Ornstein-Zernike equation. Let's formulate it slightly differently:

$$\hat{\mathbf{\Gamma}} = (\mathbf{1} - \hat{\mathbf{C}})^{-1}\hat{\mathbf{C}} - \hat{\mathbf{C}}$$

or

$$\hat{\mathbf{\Gamma}} = \mathbf{S}\hat{\mathbf{C}} - \hat{\mathbf{C}}, \qquad \mathbf{S} = (\mathbf{1} - \hat{\mathbf{C}})^{-1}$$

In this case, **1** is the identity matrix (ones on the diagonal) and **S** is the so-called partial structure factor matrix. The convergence of the normal cycle described above is achieved when the input and output Gamma functions are the same:

$$\Gamma_\mathrm{out}(\Gamma^*) = \Gamma^*$$

We start the calculation with some other function, very similar to the exact one. As a result, we obtain

$$\Gamma_\mathrm{out}(\Gamma_\mathrm{in}) \approx \Gamma^* + A'\,(\Gamma_\mathrm{in} - \Gamma^*)$$

The last term is linear in the deviation $\Delta\Gamma = \Gamma^* - \Gamma_\mathrm{in}$, coupled via a complicated linear operator $A'$. Rearrangement yields

$$(1 - A')\,\Delta\Gamma = \Gamma_\mathrm{out}(\Gamma_\mathrm{in}) - \Gamma_\mathrm{in}$$

which is a system of linear equations of the form AX = B, soluble by appropriate methods.

We now have to show what the operator A' looks like. We have to linearize the OZ equation and the closure. For HNC, the derivation leads to

$$\Delta c_{ij} = h_{ij}\,\Delta\Gamma_{ij}$$

whereas for PY, we obtain

$$\Delta c_{ij} = M_{ij}\,\Delta\Gamma_{ij}$$

where h is the original total correlation function without the density prefactors applied and M is the respective Mayer function. In the following text (in order to make the discussion of the two closures easier), the respective factor multiplying $\Delta\Gamma_{ij}$ will be called $d_{ij}$.

Fourier transformation is a linear operator, therefore (we assume that the forward FT takes care of the density prefactor – it is "normalized" by it; as a result, we have dimensionless correlation functions in Fourier space):

$$\Delta\hat{C}_{ij} = \mathrm{FT}\left[d_{ij}\,\Delta\Gamma_{ij}\right]$$

Differentiation of the OZ equation yields

$$\Delta\hat{\mathbf{\Gamma}} = \Delta\mathbf{S}\,\hat{\mathbf{C}} + \mathbf{S}\,\Delta\hat{\mathbf{C}} - \Delta\hat{\mathbf{C}}$$

We now need some straightforward manipulations: differentiating $\mathbf{S}(\mathbf{1} - \hat{\mathbf{C}}) = \mathbf{1}$ gives

$$\Delta\mathbf{S} = \mathbf{S}\,\Delta\hat{\mathbf{C}}\,\mathbf{S}$$

Further rearrangement (using $\mathbf{S}\hat{\mathbf{C}} = \mathbf{S} - \mathbf{1}$) yields

$$\Delta\hat{\mathbf{\Gamma}} = \mathbf{S}\,\Delta\hat{\mathbf{C}}\,\mathbf{S} - \Delta\hat{\mathbf{C}}$$

Please note that this expression involves matrix products (as opposed to all the steps performed so far, which always involve only functions for the same combination "ij").

The new Gamma function is calculated easily, and the small change as well:

$$\hat{\mathbf{\Gamma}}_\mathrm{new} = \hat{\mathbf{\Gamma}} + \Delta\hat{\mathbf{\Gamma}}, \qquad \Delta\hat{\mathbf{\Gamma}} = \mathbf{S}\,\Delta\hat{\mathbf{C}}\,\mathbf{S} - \Delta\hat{\mathbf{C}}$$

The last expression is valid only when the correct form of multiplication is used (matrix product!).

The inverse Fourier transform is linear as well (we assume that it takes care of the removal of the density prefactor):

$$\Delta\Gamma_{ij} = \mathrm{FT}^{-1}\left[\Delta\hat{\Gamma}_{ij}\right]$$

Putting all these steps together, we can write the total form of the operator A':

$$A'\,\Delta\Gamma = \mathrm{FT}^{-1}\left[\mathbf{S}\,\mathrm{FT}\left[d\,\Delta\Gamma\right]\mathbf{S} - \mathrm{FT}\left[d\,\Delta\Gamma\right]\right]$$

The total operator A is then defined as A = 1 − A'.
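The composition can be illustrated as follows. The transforms are replaced by identity placeholders and S by the unit matrix, so this only shows how the element-wise and matrix-wise products are chained, not a physical calculation:

```python
import numpy as np

ncomp, N = 2, 8
rng = np.random.default_rng(2)
d = rng.normal(size=(ncomp, ncomp, N))                 # closure factor d_ij
S_fac = np.tile(np.eye(ncomp)[:, :, None], (1, 1, N))  # S = 1 as a stand-in

def ft(x):   # placeholder for the (linear) forward sine transform
    return x.copy()

def ift(x):  # placeholder for the inverse transform
    return x.copy()

def apply_A(X):
    dX = ft(d * X)                                     # element-wise, then FT
    SdXS = np.einsum('ilk,ljk,jmk->imk', S_fac, dX, S_fac)  # S dX S per point
    return X - ift(SdXS - dX)                          # A = 1 - A'

# with S equal to unity, S dX S - dX vanishes, so A reduces to the identity
X = rng.normal(size=(ncomp, ncomp, N))
AX = apply_A(X)
```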

As was noted above, the inversion of the Jacobian matrix needed for the solution of the problem AX = B with the Newton-Raphson method is computationally prohibitive. Therefore, we use an iterative method that allows a quite fast determination of the solution: the so-called conjugate gradients method, in its nonsymmetric, iterative form. We repeat the following procedure until the remainders

$$R = B - A X$$

are zero. The following definitions are used in the process:

$$\alpha_i = \frac{(A^\dagger R_i,\, A^\dagger R_i)}{(A P_i,\, A P_i)}, \qquad X_{i+1} = X_i + \alpha_i P_i, \qquad R_{i+1} = R_i - \alpha_i A P_i$$

$$\beta_i = \frac{(A^\dagger R_{i+1},\, A^\dagger R_{i+1})}{(A^\dagger R_i,\, A^\dagger R_i)}, \qquad P_{i+1} = A^\dagger R_{i+1} + \beta_i P_i$$

where (X, Y) denotes the inner product

$$(X, Y) = \sum_{ij}\sum_{k=1}^{N} X_{ij}(r_k)\, Y_{ij}(r_k)$$

and $A^\dagger$ is the adjoint of the operator A, such that

$$(A X, Y) = (X, A^\dagger Y)$$

To find the adjoint of A, we note that it can be 'separated' into several substeps (as shown above): multiplication by d, the Fourier transform, the matrix products with S, and the inverse transform; and that $(AB)^\dagger = B^\dagger A^\dagger$ (standard property of the operator adjoint).

We also assume that the Fourier transform is self-adjoint. It does not have to be for approximate discrete transforms, but tests seem to show that this assumption is correct in the case of the FT employed in pyOZ. It has to be noted that the ability of this method to converge to the result depends strongly on the quality of the adjoint. The adjoint of the operator A is then

$$A^\dagger Y = Y - d \cdot \mathrm{FT}^{-1}\left[\mathbf{S}\,\mathrm{FT}\left[Y\right]\mathbf{S} - \mathrm{FT}\left[Y\right]\right]$$

where the products involving the partial structure factors are done matrix-wise and the other products element-wise.

For the first iteration, $X_0$ is set to a fraction of B (in the case of pyOZ, the default is zero). Also, $R_0 = B - A X_0$ and $P_0 = A^\dagger R_0$.
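A generic sketch of this nonsymmetric CG (CGNR) scheme, with a small dense matrix standing in for the operator; in the actual problem the operator and its adjoint are applied function-wise, never formed explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # well-conditioned test operator
b = rng.normal(size=n)

def apply_A(x):
    return A @ x

def apply_A_adjoint(y):          # for a real dense matrix, adjoint = transpose
    return A.T @ y

x = np.zeros(n)                  # X_0: a fraction of B (here: zero)
r = b - apply_A(x)               # R_0 = B - A X_0
p = apply_A_adjoint(r)           # P_0 = A^dagger R_0
for _ in range(2 * n):
    Ap = apply_A(p)
    At_r = apply_A_adjoint(r)
    alpha = (At_r @ At_r) / (Ap @ Ap)
    x = x + alpha * p
    r = r - alpha * Ap
    if np.linalg.norm(r) < 1e-12:
        break
    At_r_new = apply_A_adjoint(r)
    beta = (At_r_new @ At_r_new) / (At_r @ At_r)
    p = At_r_new + beta * p
```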

The whole modified algorithm is very similar to the original. The calculation of the Gamma function is the same; the difference comes at the moment when Gamma for the new iteration is calculated.

1. Set the starting Gamma function: use either zeros or the Gamma function from a previous calculation.
2. Calculate the direct correlation function **c** in real space using the selected closure relation and Γ.
3. Fourier transform the direct correlation function to Fourier space.
4. Solve the matrix problem in Fourier space and get $\hat{\Gamma}$, the Gamma function in Fourier space with the density prefactor applied.
5. Fourier transform the Gamma function to real space using the inverse discrete sine transform.
6. Check the convergence (DSQN) of Γ as defined above, comparing the old and new Gamma.
7. If not converged, use the Newton-Raphson/conjugate gradients procedure as described earlier. Stop when the remainder R has converged (using DSQN(R, 0)) to the desired accuracy, given as a fraction of the DSQN of the main (outer) cycle; in this way, unnecessary iterations are avoided in the beginning, when DSQN is large. The new Gamma is then calculated by

   $$\Gamma_\mathrm{new} = \Gamma_\mathrm{old} + X$$

   where X is the solution of the linear system obtained by the CG procedure. The algorithm then returns to step 2 and another cycle is started.
8. If convergence is achieved, finish the calculation.
