General

Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) is a package, written in C and MATLAB/OCTAVE, that includes an eigensolver implemented with the Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG). Its main features are: a matrix-free iterative method for computing several extreme eigenpairs of symmetric positive generalized eigenproblems; a user-defined symmetric positive preconditioner; robustness with respect to random initial approximations, variable preconditioners, and ill-conditioning of the stiffness matrix; and apparently optimal convergence speed.

BLOPEX supports parallel MPI-based computations. BLOPEX is incorporated in the HYPRE package and is available as an external package to the PETSc package. SLEPc and PHAML, as well as DevTools by Visual Kinematics, have interfaces to call BLOPEX eigensolvers.

BLOPEX relies on volunteers for its support and development. If you want to volunteer, which would be greatly appreciated, please drop a line to 2AndrewKnyazev@gmail.com.

How to use it

  • This code does NOT compute ALL eigenvalues. It is designed to compute no more than ~20% of them: e.g., if the matrix size is 100, do NOT attempt to compute more than ~20 eigenpairs; otherwise, the code will crash.

  • We do NOT provide the binaries, only the source, mostly in C, some in FORTRAN and MATLAB/OCTAVE.

  • To get the source, use the "Source" tab, specifically read the information from http://code.google.com/p/blopex/source/checkout . Do NOT use the "Downloads" tab above --- the downloads are for developers only.

  • In HYPRE, the BLOPEX code is already incorporated. Just download and compile Hypre.

  • In PETSc release 3.1, the BLOPEX code is available as an external package. Download PETSc, and use the option --download-blopex=1 in the PETSc configure, prior to PETSc compilation.

  • In PETSc release 3.2 and later, the BLOPEX code has been moved to SLEPc, where it is now available as an external package. Download and install PETSc first. Then download SLEPc, and use the option --download-blopex=1 in the SLEPc configure, prior to SLEPc make.

  • To build a stand-alone version, one needs the core of BLOPEX, blopex_abstract, which contains the eigensolver code and can be compiled into the BLOPEX library using the sample makefiles provided. One also needs blopex_serial_double and/or blopex_serial_complex. The current stand-alone version is for serial computations only. For parallel computations, one must use PETSc and/or Hypre.

  • A native MATLAB/Octave code is available at blopex_tool/matlab/lobpcg; see also http://www.mathworks.com/matlabcentral/fileexchange/48-lobpcg-m . A short usage sketch follows this list.

  • A native Java code is available from a sister project http://code.google.com/p/sparse-eigensolvers-java/

  • An interface of our C blopex_abstract code to MATLAB (both 32 and 64 bit) is available at blopex_matlab. This interface is mainly designed for testing of our C blopex_abstract code. For actual computations in MATLAB/OCTAVE, it is instead recommended to use the native MATLAB/OCTAVE code above.

  • blopex_petsc and blopex_hypre are provided for reference only, since they are already included in PETSc and Hypre, respectively.
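
For orientation, here is a minimal MATLAB/OCTAVE sketch of calling the native lobpcg.m mentioned above on a small sparse matrix. The calling sequence [X, lambda, failureFlag] = lobpcg(X0, A, tol, maxit) follows the basic form documented in the lobpcg.m help; the test matrix, block size, tolerance, and iteration limit below are only illustrative choices, not BLOPEX defaults.

    % A minimal sketch (not part of the BLOPEX sources): compute a few of the
    % smallest eigenpairs of a 1-D Dirichlet Laplacian with lobpcg.m from
    % blopex_tool/matlab/lobpcg (or its MATLAB Central File Exchange copy).
    n = 361;                               % matrix size
    e = ones(n, 1);
    A = spdiags([-e 2*e -e], -1:1, n, n);  % sparse tridiag(-1,2,-1), symmetric positive definite
    k = 4;                                 % block size; keep it well under ~20% of n (see above)
    X0 = randn(n, k);                      % random initial block of vectors
    [X, lambda, failureFlag] = lobpcg(X0, A, 1e-5, 200);  % tolerance and max iterations
    % Sanity check against the known eigenvalues 4*sin(j*pi/(2*(n+1)))^2 of tridiag(-1,2,-1):
    exact = 4 * sin((1:k)' * pi / (2*(n+1))).^2;
    disp([lambda exact]);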

We also provide some useful testing tools in blopex_tools/matlab :

  • laplacian builds the matrix of a 1-, 2-, or 3-D Laplacian, see http://en.wikipedia.org/wiki/Kronecker_sum_of_discrete_Laplacians , and computes its eigenpairs using the explicit formulas from http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors_of_the_second_derivative . The same code is also available at http://www.mathworks.com/matlabcentral/fileexchange/27279-laplacian-in-1d-2d-or-3d

  • matlab2hypre converts matrices between hypre and MATLAB/OCTAVE. The same code is also available at http://www.mathworks.com/matlabcentral/fileexchange/27437-matlab2hypre-and-hypre2matlab
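
The following MATLAB/OCTAVE sketch illustrates the laplacian tool. It assumes the calling convention [lambda, V, A] = laplacian(n, B, m) described on the File Exchange page above, with grid dimensions n, boundary conditions B, and the number of requested eigenpairs m; the argument values are illustrative and should be checked against the tool's own help.

    % A sketch only: build the sparse 2-D negative Laplacian on a 20-by-30 grid
    % with Dirichlet boundary conditions and get its 5 smallest eigenpairs from
    % the explicit formulas (calling convention assumed from the File Exchange page).
    [lambda, V, A] = laplacian([20 30], {'DD' 'DD'}, 5);
    % A is the 600-by-600 Kronecker-sum matrix; lambda and V hold the 5 smallest
    % eigenvalues and the corresponding eigenvectors from the analytic expressions.
    % Cross-check with a general-purpose sparse eigensolver:
    lambda_eigs = eigs(A, 5, 'sa');   % 'sa' = smallest algebraic, symmetric A
    disp([lambda(:) sort(lambda_eigs)]);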

BLOPEX Examples

A few codes used to test the library can be viewed as examples; they are called "drivers". See, e.g., the code for single-thread double precision at https://code.google.com/p/blopex/source/browse/trunk/blopex_serial_double/driver/serial_driver.c

hypre-BLOPEX has its own examples; search the hypre documentation for LOBPCG or eigenvalue. Several hypre drivers are used for testing in the BLOPEX 2007 paper listed below.

SLEPc has lots of examples. Use the option --download-blopex=1 in the SLEPc configure, prior to SLEPc make, and read the SLEPc docs on how to run SLEPc examples using the BLOPEX eigenvalue solver LOBPCG.

Please contact hypre and SLEPc developers directly if you have any questions about their examples of using LOBPCG-BLOPEX.

Can BLOPEX be made to run faster?

Yes, BLOPEX can be made to run much faster. None of the current BLOPEX implementations, including those in hypre and SLEPc, uses a "true multivector". A "multivector" is the main structure in BLOPEX, representing blocks of vectors. The LOBPCG code requires performing basic algebraic operations, e.g., sums, scalar products, and application of functions, with multivectors. However, to our knowledge, in all current (2013) implementations of BLOPEX, the multivector is faked, i.e., is simply substituted by a collection of individual vectors. Thus, e.g., a sum of fake multivectors is a loop of sums of vectors, which is only Level 1 BLAS. A "true multivector" would likely accelerate the code by orders of magnitude in parallel runs.
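
To see why this matters, here is a small MATLAB/OCTAVE sketch (illustrative only, not BLOPEX code) contrasting the "fake" multivector, processed one column at a time, with a "true" multivector operation that maps to a single Level 3 BLAS call; the second form is typically much faster for the block sizes LOBPCG uses.

    % Gram matrix G = X'*Y of two blocks of k vectors of length n,
    % computed two ways (illustration only, not taken from BLOPEX).
    n = 5e5; k = 20;
    X = randn(n, k); Y = randn(n, k);

    % "Fake" multivector: a double loop of individual dot products,
    % i.e., k*k Level 1 BLAS calls.
    tic;
    G1 = zeros(k);
    for i = 1:k
      for j = 1:k
        G1(i, j) = X(:, i)' * Y(:, j);
      end
    end
    t1 = toc;

    % "True" multivector: one matrix-matrix product, i.e., Level 3 BLAS.
    tic;
    G2 = X' * Y;
    t2 = toc;

    fprintf('loop of dot products: %.3f s, single GEMM: %.3f s, diff %g\n', ...
            t1, t2, norm(G1 - G2, 'fro'));

In a distributed-memory setting the gain is even larger, because a true multivector also aggregates communication, as mentioned for the planned PETSc TAIJ format below.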

We currently (2013) do not have the resources, and thus no plans, to implement the "true multivector" in serial versions of BLOPEX, even though that would make the code run several times faster due to the use of Level 3 BLAS. If you volunteer to do it, we can help with advice, and would be glad to add you to the team of developers. It is not that difficult, but it is surely time consuming and requires proper qualifications. That could be a fun project, e.g., for INTEL and AMD developers, to implement a true multivector library, make a BLOPEX interface to it, and test it on important applications.

hypre has an incomplete (as of 2013) implementation of the "true multivector," with some key functions still missing, and apparently no plans to complete it.

PETSc developers plan (Nov. 2013) to implement the "true multivector" in the next PETSc release in the form of a TAIJ matrix that will allow MatMult, MatSolve, etc., using aggregated communication to distribute the multivector between the processors and a contiguous local structure suitable for high-level BLAS. Please contact petsc-maint@mcs.anl.gov for questions/comments.

REFERENCES:

  1. A. V. Knyazev, I. Lashuk, M. E. Argentati, and E. Ovchinnikov, Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in hypre and PETSc. SIAM Journal on Scientific Computing (SISC) 29 (2007), no. 5, pp. 2224-2239. http://dx.doi.org/10.1137/060661624

  2. A. V. Knyazev, Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method. SIAM Journal on Scientific Computing 23 (2001), no. 2, pp. 517-541. http://dx.doi.org/10.1137/S1064827500366124


This material was based upon work supported by the National Science Foundation under Grant No. 1115734. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
