## 12.3 Eigenvalue Problems

Eigenvalue problems involve the following: Given a matrix A, find the numbers λ and vectors x such that Ax = λx, or more generally, Ax = λMx, where M is another matrix. The distinction between sparse and dense matrices does not play so large a role as it does in systems solving; for eigenvalues the main distinction is whether one wants all the possible λ values and attendant x vectors, or just a subset, typically the few largest or smallest. ScaLAPACK and PLAPACK are packages that start with a dense matrix to calculate all or potentially part of the spectrum (Section 12.3.1), while Arpack (Section 12.3.2) is preferable when only part of the spectrum is wanted; since it uses reverse communication, Arpack can handle matrices in sparse format.

### 12.3.1 Eigenvalue Computations in ScaLAPACK and PLAPACK

In addition to the linear system solvers mentioned above, ScaLAPACK has eigenvalue routines. For the symmetric eigenvalue problem there are driver routines; for the nonsymmetric (non-Hermitian) problem, you need to call individual computational routines.

• For the single- and double-precision real symmetric eigenvalue problem, there are simple drivers PSSYEV and PDSYEV, respectively, as well as expert drivers with X appended.

• For the complex Hermitian problem there are only expert drivers: PCHEEVX and PZHEEVX for single and double precision, respectively.

• The nonsymmetric eigenvalue problem is tackled in two steps: reduction to upper Hessenberg form by PxGEHRD, followed by reduction of the Hessenberg matrix to Schur form by PxLAHQR.

• ScaLAPACK has routines for the generalized eigenvalue problem only in the symmetric (Hermitian) definite case: PxSYGST (with x=S,D), and PxHEGST (with x=C,Z).

PLAPACK version 3.2 (announced for release in late 2003) contains an implementation of the "Holy Grail" eigensolver, which is also present in LAPACK. The functionality of the PLAPACK eigensolvers is twofold. First, there is a parallel eigensolver for tridiagonal symmetric matrices extending the algorithm presented by Dhillon and Parlett; this routine allows the computation of all or a subset of the eigenvalues and eigenvectors on a given number of processors. This is claimed to be the fastest parallel tridiagonal eigensolver available. Second, the tridiagonal eigensolver is merged with a routine to reduce a dense symmetric matrix to tridiagonal form and with a routine for the backtransformation, thus obtaining a dense eigensolver for symmetric matrices. Large problems (n > 100,000) can be tackled with this routine on a 256-processor machine.

### 12.3.2 Eigenvalue Computations in Arpack

Often, in eigenvalue computations, not all eigenvalues or eigenvectors are needed. In such cases, one is typically interested in the largest or smallest eigenvalues of the spectrum, or in eigenvalues clustered around a certain value.

While ScaLAPACK has routines that can compute a full spectrum, Arpack focuses on the computation of a small number of eigenvalues and corresponding eigenvectors. It is based on the Arnoldi method. 

The Arnoldi method is unsuitable for finding eigenvalues in the interior of the spectrum, so such eigenvalues are found by "shift-invert": given a shift σ close to the eigenvalues being sought, one solves the transformed eigenvalue equation (A − σI)⁻¹x = μx, since each eigenvalue λ of A close to σ becomes a large eigenvalue μ = 1/(λ − σ) of (A − σI)⁻¹.

#### Reverse Communication Program Structure

The Arnoldi method has the attractive property of accessing the matrix only through the matrix-vector product operation. However, finding eigenvalues other than the largest requires solving linear systems with the given matrix or one derived from it.

Since the Arnoldi method can be formulated in terms of the matrix-vector product operation, Arpack (strictly speaking) never needs access to individual matrix elements. To take advantage of this fact, Arpack uses a technique called "reverse communication," which dispenses with the need for the user to pass the matrix to the library routines. Thus, Arpack can work with any user data structure or even with matrices that are only operatively defined.

With reverse communication, whenever a matrix operation is needed, control is passed back to the user program, with a return parameter indicating what operation is being requested. The user then satisfies this request, with input and output in arrays that are passed to the library, and calls the library routine again, indicating that the operation has been performed.

Thus, the structure of a routine using Arpack will be along the following lines:

```
      ido = 0
 10   continue
      call dsaupd( ido, .... )
      if (ido.eq.-1 .or. ido.eq.1) then
C        perform matrix vector product
         goto 10
      end if
```

For the case of shift-invert or the generalized eigenvalue problem, the conditional has more clauses, but the structure stays the same.

Arpack can be used in a number of different modes, covering the regular and generalized eigenvalue problem, symmetry of the matrix A (and possibly M), and various parts of the spectrum to be computed. Rather than explaining these modes, we refer the reader to the excellent example drivers provided in the Arpack distribution.

#### Practical Aspects of Using Arpack

Arpack is written in Fortran. No C prototypes are given, but the package is easily interfaced to C code, observing the usual linker naming conventions for Fortran and C routines. The parallel version of Arpack, PArpack, can be based on either MPI or the BLACS, the communication layer of ScaLAPACK; see Section 12.1.1. Arpack uses LAPACK and, unfortunately, relies on an older version than the current one. While this version is included in the distribution, it cannot easily be replaced by a vendor-optimized version.

The flip side of the data independence obtained by reverse communication is that the user must provide a matrix-vector product, a task that is not trivial, especially in the parallel case. Also, in the shift-invert case the user must provide a linear system solver. We recommend the use of a package such as Aztec (see Section 12.2.2) or PETSc (see Section 12.2.3).

In fact, the pure Arnoldi method would have prohibitive memory demands; what is used here is the "implicitly restarted Arnoldi method".