Algorithms of OpenMPS

In this section we provide some details on the algorithms used by OSMPS in order to give the user some understanding of the available convergence parameters. The reader interested in a broader view of MPSs and their algorithms should consult Ulrich Schollwoeck’s review, The density-matrix renormalization group in the age of matrix product states, which is the standard reference on the subject as of this writing [Schollwock11].

Definitions

We define a tensor as a map from a product of Hilbert spaces to the complex numbers

T:\mathbb{H}_1\otimes \mathbb{H}_2\otimes \dots \otimes \mathbb{H}_r\to\mathbb{C}\, .

Here r is the rank of the tensor. If we evaluate the elements of the tensor T in a fixed basis \left\{|i_j\rangle \right\} for each Hilbert space \mathbb{H}_j, then equivalent information is carried in the multidimensional array T_{i_1\dots i_r}. We will also refer to this multidimensional array as a tensor. The information carried in a tensor does not change if we change the order in which its indices appear. We will call such a generalized transposition a permutation of the tensor. As an example, the permutations of the rank-3 tensor T are

T_{ijk}=\left[T'\right]_{kij}=\left[T''\right]_{jki}=\left[T'''\right]_{jik}=\left[T''''\right]_{kji}=\left[T'''''\right]_{ikj}\, .

Here, the primes indicate that the tensor differs from its unprimed counterpart only by a permutation of indices. Similarly, by combining two such indices together using the Kronecker product we can define an equivalent tensor of lower rank, a process we call index fusion. We denote the Kronecker product of two indices a and b using parentheses as \left(ab\right), and a representation is provided by

\left(ab\right)=\left(a-1\right)d_b+b\, ,

where d_b is the dimension of \mathbb{H}_b and a and b are both indexed starting from 1. An example of fusion is

T_{ijk}=\left[T'\right]_{i\left(jk\right)}\, .

Here, T is a rank-3 tensor of dimension d_i\times d_j\times d_k and T' is a matrix of dimension d_i\times d_jd_k. The inverse operation of fusion, which involves creating a tensor of higher rank by splitting a composite index, we refer to as index splitting.
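
To make these manipulations concrete, here is a short NumPy sketch (purely illustrative, not OSMPS code). For row-major arrays, reshape realizes exactly the composite-index convention \left(ab\right)=\left(a-1\right)d_b+b, with indices counted from 0 instead of 1:

    import numpy as np

    # A rank-3 tensor of dimension d_i x d_j x d_k.
    di, dj, dk = 2, 3, 4
    T = np.random.rand(di, dj, dk)

    # Permutation: a generalized transposition, e.g. [T']_{kij} = T_{ijk}.
    Tperm = T.transpose(2, 0, 1)          # shape (d_k, d_i, d_j)

    # Fusion: combine the last two indices into the composite index (jk).
    Tfused = T.reshape(di, dj * dk)       # a d_i x (d_j d_k) matrix

    # Splitting: the inverse operation recovers the original tensor.
    assert np.array_equal(Tfused.reshape(di, dj, dk), T)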

Just as permutations generalize the notion of matrix transposition, tensor contraction generalizes the notion of matrix multiplication. In a contraction of two tensors A and B some set of indices \mathbf{c}_A and \mathbf{c}_B which describe a common Hilbert space are summed, and the resulting tensor C consists of products of the elements of A and B as

(7) C_{\bar{\mathbf{c}}_A\bar{\mathbf{c}}_B}=\sum_{\mathbf{c}}A_{\bar{\mathbf{c}}_A\mathbf{c}}B_{\mathbf{c}\bar{\mathbf{c}}_B}\, .

Here \bar{\mathbf{c}}_A denotes the indices of A which are not contracted, and likewise for \bar{\mathbf{c}}_B. The rank of C is r_A+r_B-2n_c, where n_c is the number of indices contracted (i.e., the number of indices in \mathbf{c}) and r_A and r_B are the ranks of A and B, respectively. In writing Eq. (7) we have permuted the indices \mathbf{c}_A to be contracted to the rightmost position in A and the indices \mathbf{c}_B to the leftmost position in B for notational simplicity.
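
In practice, any contraction can be carried out as a matrix-matrix multiplication after fusing the contracted and uncontracted indices, which is how high-performance implementations typically proceed. The following NumPy sketch (again illustrative, not OSMPS code) verifies this for the contraction of two rank-3 tensors over a single index, as in Fig. 1c:

    import numpy as np

    # Contract A and B over one common index of dimension 5; the result
    # has rank r_A + r_B - 2*n_c = 3 + 3 - 2 = 4.
    A = np.random.rand(2, 3, 5)           # contracted index last
    B = np.random.rand(5, 4, 6)           # contracted index first

    # Eq. (7) as a fused matrix-matrix multiplication ...
    C = (A.reshape(2 * 3, 5) @ B.reshape(5, 4 * 6)).reshape(2, 3, 4, 6)

    # ... which agrees with a direct tensor contraction.
    assert np.allclose(C, np.tensordot(A, B, axes=([2], [0])))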

Tensor operations

Fig. 1 Examples of basic tensor operations in diagrammatic notation. a) A rank-3 tensor. b) The conjugate of a rank-3 tensor. c) The contraction of two rank-3 tensors over a single index produces a rank-4 tensor.

At this stage, it is advantageous to develop a graphical notation for tensors and their operations [SDV06]. A tensor is represented graphically by a box with lines extending upwards from it. The number of lines is equal to the rank of the tensor. The order of the indices from left to right is the same as the ordering of the lines from left to right. A contraction of two tensors is represented by a line connecting them. Finally, the complex conjugate of a tensor is denoted by a box with lines extending downwards. Some basic tensor operations are shown in graphical notation in Fig. 1.

Following a similar line of reasoning as for contractions above, we may also decompose tensors into contractions of tensors using permutation, fusion, and any of the well-known matrix decompositions such as the singular value decomposition (SVD) or the QR decomposition. For example, a rank-3 tensor T can be factorized as

T_{ijk}=\sum_lU_{\left(ij\right)l}S_lV_{lk}\, ,

where U and V have orthonormal columns and rows, respectively, and S is a vector of non-negative real singular values. Such decompositions are of great use in MPS algorithms.
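
As an illustration (plain NumPy once more, not OSMPS code), the factorization above amounts to fusing two indices and calling a matrix SVD; truncating the sum over l to the \chi largest singular values then yields the best fixed-rank approximation in the 2-norm, which is the basis of bond-dimension truncation in MPS algorithms:

    import numpy as np

    di, dj, dk = 2, 3, 4
    T = np.random.rand(di, dj, dk)

    # Fuse (ij), then decompose: T_{(ij)k} = sum_l U_{(ij)l} S_l V_{lk}.
    U, S, V = np.linalg.svd(T.reshape(di * dj, dk), full_matrices=False)
    assert np.allclose(((U * S) @ V).reshape(di, dj, dk), T)

    # Keeping only the chi largest singular values truncates the bond.
    chi = 2
    T_approx = ((U[:, :chi] * S[:chi]) @ V[:chi, :]).reshape(di, dj, dk)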

A tensor network is now defined as a set of tensors whose indices are connected in a network pattern, see Fig. 2. Suppose that some set \mathbf{c} of the network’s indices is contracted over, while the complement \bar{\mathbf{c}} remains uncontracted. Then, this network is a decomposition of some tensor T_{\bar{\mathbf{c}}}. The basic idea of tensor network algorithms utilizing MPSs and their higher-dimensional generalizations, such as projected entangled-pair states (PEPS) [VC04], [VMC08] and the multiscale entanglement renormalization ansatz (MERA) [Vid07], [EV09], is to represent the high-rank tensor c_{i_1\dots i_L} encoding a many-body wavefunction in a Fock basis,

|\psi\rangle=\sum_{i_1\dots i_L}c_{i_1\dots i_L}|i_1\dots i_L\rangle\, ,

as a tensor network with tensors of small rank. We set the convention that indices which are contracted over in the tensor network decomposition will be denoted by Greek indices, and indices which are left uncontracted will be denoted by Roman indices. The former type of index will be referred to as a bond index, and the latter as a physical index.

7-site MPS

Fig. 2 An MPS with 7 sites and open boundary conditions.

In particular, an MPS imposes a one-dimensional topology on the tensor network such that all the tensors appearing in the decomposition are rank-3. The resulting decomposition has the structure shown in Fig. 2. Explicitly, an MPS may be written in the form

(8) |\psi_{\mathrm{MPS}}\rangle=\sum_{i_1,\dots,i_L=1}^{d}\mathrm{Tr}\left(A^{\left[1\right]i_1}\dots A^{\left[L\right]i_L}\right)|i_1\dots i_L\rangle\, .

Here, i_1\dots i_L label the L distinct sites, each of which contains a d dimensional Hilbert space. We will call d the local dimension. The superscript index in brackets \left[j\right] denotes that this is the tensor of the j^{\mathrm{th}} site, as these tensors are not all the same in general. Finally, the trace effectively sums over the first and last dimensions of A^{\left[1\right]i_1} and A^{\left[L\right]i_L} concurrently, and is necessary only for periodic boundary conditions where these dimensions are greater than 1. All algorithms in OSMPS work only with open boundary conditions. Obscured within the matrix product of Eq. (8) is the size of the matrix A^{\left[j\right]i_j} formed from the tensor A^{\left[j\right]} with its physical index held constant. We will refer to the left and right dimensions of this matrix as \chi_{j} and \chi_{j+1}, and the maximum value of \chi_j for any tensor, the bond dimension, will be denoted as \chi. The bond dimension is the parameter which determines the efficiency of an MPS simulation, and also its dominant computational scaling. From the relation S_{\mathrm{vN}} \le \log\chi, where S_{\mathrm{vN}} is the maximum von Neumann entropy of entanglement of any bipartite splitting, we also have that \chi represents an entanglement cutoff for MPSs.
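
The content of Eq. (8) can be checked with a small self-contained NumPy sketch (illustrative only; this is not how OSMPS stores states internally). Here each site tensor uses the index order \left(\chi_j, i_j, \chi_{j+1}\right), a common but not unique convention, and contracting all bond indices recovers the exponentially large coefficient tensor:

    import numpy as np

    def mps_to_state(As):
        """Contract an open-boundary MPS, given as a list of rank-3 arrays
        of shape (chi_j, d, chi_{j+1}) with chi_1 = chi_{L+1} = 1, into the
        full coefficient tensor c_{i_1 ... i_L} of Eq. (8)."""
        c = As[0]
        for A in As[1:]:
            c = np.tensordot(c, A, axes=([-1], [0]))  # sum the shared bond
        return c.reshape([A.shape[1] for A in As])    # drop trivial edge bonds

    # A random 4-site MPS with local dimension d = 2 and bond dimension chi = 3.
    d, chi = 2, 3
    As = [np.random.rand(1, d, chi), np.random.rand(chi, d, chi),
          np.random.rand(chi, d, chi), np.random.rand(chi, d, 1)]
    psi = mps_to_state(As)   # shape (2, 2, 2, 2): exponential in L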

Variational excited state search: eMPS

The algorithm for finding excited states variationally with MPSs, which we call eMPS [WC12], uses a process of local minimization and sweeping similar to the variational ground state search. The difference is that the local minimization is performed using a projected effective Hamiltonian, which projects the variational state into the space orthogonal to all previously obtained eigenstates. Hence, the convergence parameters which are used for eMPS are identical to those used for the variational ground state search collected in the table of convergence.MPSConvParam. The eMPS method is used whenever the key 'n_excited_states' in parameters has a value greater than zero, see Sec. Specifying the parameters of a simulation.
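
The essential linear algebra can be caricatured with dense matrices (illustration only; in eMPS the projection acts on the local effective Hamiltonian during the sweep, and the penalty shift below is merely one convenient way to remove the projected-out direction from the low end of the spectrum):

    import numpy as np

    rng = np.random.default_rng(1)
    H = rng.standard_normal((8, 8))
    H = H + H.T                          # a random symmetric "Hamiltonian"

    E, V = np.linalg.eigh(H)
    psi0 = V[:, 0]                       # previously obtained ground state

    # Project onto the complement of |psi0>; the penalty pushes the
    # projected-out direction far above the spectrum of interest.
    P = np.eye(8) - np.outer(psi0, psi0)
    Heff = P @ H @ P + 1e6 * np.outer(psi0, psi0)

    E1, V1 = np.linalg.eigh(Heff)
    assert np.isclose(E1[0], E[1])       # lowest state of Heff = first excited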

iMPS as initial ansatz

As the MPS methods used in OSMPS are variational, their efficiency is greatly enhanced by the availability of a good initial guess for the wavefunction. For the ground state, a good guess can be found by using a fixed number of iterations of the infinite-size variational ground state search with MPSs (iMPS), discussed in more detail in Sec. Infinite-size ground state search: iMPS. In this method, we begin by considering two sites and minimize the energy locally using the Lanczos iteration as discussed in Sec. Variational ground state search. We then decompose the two-site wavefunction into two separate rank-3 tensors and use these tensors as an effective environment into which two new sites are embedded as in Fig. 4. The energy of these two inner sites is minimized with the environment sites held fixed, and then these sites are absorbed into the environment and a new pair of sites [1] is inserted into the center. This process is repeated until we have a chain of sites of the desired length. We call this process the warmup phase.

The parameters used to control the convergence of the warmup phase are included as part of the convergence.MPSConvParam class. In addition to the Lanczos convergence parameters discussed in Sec. Variational ground state search and the table in convergence.MPSConvParam, the warmup-specific parameters are \chi_{\mathrm{warmup}}, the maximum bond dimension allowed during warmup, and \epsilon_{\mathrm{warmup}}, the local truncation tolerance determining the bond dimension according to Eq. (9) during warmup. These convergence parameters are set in an object of the convergence.MPSConvParam class as specified in the table in its class documentation. Note that only the values of the warmup convergence parameters from the first set of convergence parameters are used. That is to say, warmup is only used to construct an initial state, and subsequent refinements use the output of one variational ground state search as the input to a more refined variational ground state search. Also note that warmup is relevant only to the ground state search and not to the excited state search, so setting values of the warmup convergence parameters for objects of the convergence.MPSConvParam class to be used for eMPS has no effect.
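
As a hypothetical configuration sketch (the keyword names warmup_bond_dimension and warmup_tol below are illustrative stand-ins for \chi_{\mathrm{warmup}} and \epsilon_{\mathrm{warmup}}; consult the convergence.MPSConvParam class table for the authoritative keywords):

    import MPSPyLib as mps   # the OSMPS Python front end

    # Stand-in keyword names for the warmup parameters described above;
    # see the convergence.MPSConvParam documentation for exact spellings.
    conv = mps.MPSConvParam(max_bond_dimension=80, max_num_sweeps=6,
                            warmup_bond_dimension=20, warmup_tol=1e-10)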

Warmup with iMPS

Fig. 4 The iMPS iteration successively adds sites to the center of a chain, performs local minimization of the energy with only these sites, and then absorbs these sites into the environment.

[1] For finite lattices with an odd number of sites, a single site is inserted on the last iteration.

Infinite-size ground state search: iMPS

In addition to being used to initialize finite-size simulations, a variation of the iMPS method presented in Sec. iMPS as initial ansatz can also be used to find a representation of an infinitely large wavefunction which is translationally invariant under shifts by some number of sites L\ge 2. The convergence behavior of iMPS is determined by an object of the convergence.iMPSConvParam class. The iMPS minimization is performed by inserting L sites at each iMPS iteration as shown in Fig. 4 and then minimizing the energy of these L sites with the given fixed environment. After minimization, these sites are absorbed into the environment, L new sites are added, and the minimization is repeated. The iteration has converged when two successive unit cells are close; this closeness is measured rigorously by the orthogonality fidelity [McCulloch08]. In OSMPS, we take the stopping condition to be that the orthogonality fidelity is less than the unit-cell averaged truncation error as measured by Eq. (9) for 10 successive iterations. If there is no truncation error, then the stopping criterion is that the orthogonality fidelity is less than \epsilon_{\mathrm{v}} for 10 successive iterations, where \epsilon_{\mathrm{v}} is the 'variance_tol' in convergence.iMPSConvParam. This convergence condition on the orthogonality fidelity indicates that the differences between successive iterations are due only to truncation arising from a finite bond dimension. A maximum number of iterations may also be specified as 'max_num_imps_iter'.

The variance is not used to determine convergence of a unit cell to its minimum. Rather, a fixed number of sweeps, specified by 'min_num_sweeps', is used to converge each unit cell. Hence, the relevant convergence parameters for an iMPS simulation are the parameters of convergence.iMPSConvParam collected in the table in its class documentation.
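
The stopping rule can be summarized by the following toy loop (the geometrically decaying fidelities are mock values standing in for the orthogonality fidelity; none of this reflects OSMPS internals). Convergence is declared only once the criterion has held for 10 successive iterations:

    variance_tol, max_num_imps_iter = 1e-8, 500
    fidelities = (0.5 ** k for k in range(max_num_imps_iter))  # mock data

    hits = 0
    for it, fid in enumerate(fidelities):
        hits = hits + 1 if fid < variance_tol else 0           # 10 in a row
        if hits >= 10:
            print("converged after", it + 1, "iterations")
            break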

Krylov-based time evolution: tMPS

The Krylov-based tMPS algorithm [WC12] has its own class of convergence parameters called convergence.KrylovConvParam. Its parameters are collected in the class description. While many of these parameters have the same names as those in convergence.MPSConvParam, they have different interpretations. The Lanczos procedure now refers to the Lanczos method for approximating the action of the matrix exponential, and so the stopping criterion is that the difference between the variational state and the true state acted on by the matrix exponential is less than 'lanczos_tol' in the 2-norm. Neither the action of an operator \hat{H} on a state |\psi\rangle nor a sum \sum_k c_k|\phi_k\rangle can be represented exactly as an MPS with a given fixed bond dimension, and so both of these operations are performed with variational algorithms as discussed in Ref. [WC12]. The convergence parameters associated with these two variational algorithms are also given in the class description of convergence.KrylovConvParam.
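
To make the Lanczos step concrete, here is a dense-matrix toy version (plain NumPy/SciPy, not the OSMPS implementation, which carries out the same Krylov construction with MPS arithmetic): tridiagonalize H in the Krylov space generated from |\psi\rangle and exponentiate the small tridiagonal matrix:

    import numpy as np
    from scipy.linalg import expm

    def lanczos_expm(H, psi, dt, m=20):
        """Approximate exp(-1j*dt*H) @ psi in an m-dimensional Krylov space."""
        n = len(psi)
        V = np.zeros((n, m), dtype=complex)   # orthonormal Krylov basis
        T = np.zeros((m, m))                  # tridiagonal projection of H
        V[:, 0] = psi / np.linalg.norm(psi)
        beta = 0.0
        for j in range(m):
            w = H @ V[:, j]
            if j > 0:
                w -= beta * V[:, j - 1]
            T[j, j] = np.vdot(V[:, j], w).real
            w -= T[j, j] * V[:, j]
            beta = np.linalg.norm(w)
            if j + 1 == m or beta < 1e-14:    # Krylov space exhausted
                m = j + 1
                break
            T[j, j + 1] = T[j + 1, j] = beta
            V[:, j + 1] = w / beta
        # exp(-1j*dt*T) acting on the first unit vector, lifted back up.
        w_eig, S = np.linalg.eigh(T[:m, :m])
        coeff = S @ (np.exp(-1j * dt * w_eig) * S[0, :])
        return np.linalg.norm(psi) * (V[:, :m] @ coeff)

    # Toy check against the exact matrix exponential.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
    H = (A + A.conj().T) / 2
    psi = rng.standard_normal(32).astype(complex)
    err = np.linalg.norm(expm(-0.1j * H) @ psi - lanczos_expm(H, psi, 0.1))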

Time Evolving Block Decimation (TEBD)

The TEBD algorithm uses the Sornborger-Stewart decomposition [SS99] instead of the more common Trotter decomposition. Like the Trotter decomposition, this method is only valid for nearest-neighbor Hamiltonians built from site and bond rules. At present, it uses the Krylov subspace method to apply the exponential of the two-site Hamiltonian to the state. The convergence parameters are described in detail in convergence.TEBDConvParam.

Time-Dependent Variational Principle (TDVP)

The Time-Dependent Variational Principle is, after the Krylov method, the second algorithm to support long-range interactions represented in the MPO. Its convergence parameters are specified in convergence.TDVPConvParam. Details on the algorithm can be found in [HLO+16].

Local Runge-Kutta (LRK)

The Local Runge-Kutta algorithm [ZMK+15] generates an MPO representation of the propagator which can be applied to the state. Details on the convergence parameters are given in convergence.LRKConvParam.