Create a directory that will contain all libraries, source code, tools, and documentation for magpar, and set the environment variable MAGPAR_HOME:

  # change into the directory where you want to install magpar
  # for example in your $HOME/work directory
  cd $HOME; mkdir work; cd work
  # download the magpar source archive by hand or using wget:
  # unpack the archive
  tar xzvf $lib.tar.gz
  cd $lib
  MAGPAR_HOME=$PWD; export MAGPAR_HOME   # sh/bash syntax (use "setenv" for csh)
  PD=$MAGPAR_HOME/libs; export PD
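For csh/tcsh users, the "setenv" equivalent mentioned in the comment above looks like this (a config fragment, not runnable in sh/bash):

```csh
# csh/tcsh equivalent of the sh/bash exports above
setenv MAGPAR_HOME $PWD
setenv PD $MAGPAR_HOME/libs
```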

If you are upgrading from a previous version of magpar, you can usually reuse the libraries that you have already compiled and installed. However, please check the ChangeLog and upgrade libraries as required or recommended.

The current version of magpar has been developed and tested with the configuration and library versions defined in .

Links to the websites of the libraries can be found in the list of Required Software.

Create a host-specific configuration file named after $HOSTNAME using one of the provided templates, and edit it:

It should not be necessary to modify the Makefile itself at all any more!

Please refer to the FAQ for tips and suggestions for the installation of magpar on specific systems and software environments.

Automated Installation

The (manual) installation procedures described below are now conveniently combined in Makefile.libs.

Simply run:

  cd $MAGPAR_HOME/src
  make -f Makefile.libs

All libraries will be downloaded automatically using "wget", configured, compiled, and installed in $PD in the following order:

  atlas lapack mpi parmetis sundials petsc tao zlib libpng

It is also possible to install the libraries one at a time (e.g. PETSc):

  cd $MAGPAR_HOME/src
  make -f Makefile.libs petsc

using the names listed above. This makes it easier to use precompiled packages (e.g. Precompiled packages on Ubuntu/Debian) and then install just the remaining ones with the convenience of Makefile.libs.

If this does not work, then please follow the manual installation instructions below.

Once all libraries are compiled and installed, compile magpar as described below.

Manual Installation

Check for Required Software which is already preinstalled on your machine. For example, there are Precompiled packages on Ubuntu/Debian available. Download, unpack, compile, and install all other required libraries in the order given below in the directory $PD. Some libraries are optional, and only required if you want to try different linear solvers (e.g. BlockSolve95, hypre, SuperLU).

It is highly recommended to use any machine-specific (vendor-provided and highly optimized) libraries. On most high-performance machines there are optimized BLAS, LAPACK, and MPI libraries available (cf. FAQ: Optimized BLAS libraries). In this case you just have to set the paths to your libraries properly when you configure and compile the various packages.
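Before downloading anything, it can help to check which basic build tools are already present. A small sh sketch (the tool list is illustrative, not exhaustive):

```shell
# report which of the basic build tools are available on this machine
for tool in make tar gzip wget gcc gfortran; do
    if command -v $tool >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing"
    fi
done
```

Any tool reported as missing should be installed with your distribution's package manager before proceeding.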

If you have trouble installing any of the required libraries, please check their respective installation guides/documentation/FAQs/website first. The URLs of their websites can be found on the Required Software page.


For your convenience, get one of the binary packages for your hardware platform from the stable branch of ATLAS (unless you already have Optimized BLAS libraries).

  cd $PD
  # set $lib to the name of your ATLAS library archive
  tar xzvf $lib
  # create a symbolic link to the directory with the ATLAS binaries:
  ln -s Linux_* atlas
  # rename (incomplete) lapack library provided by ATLAS (cf. LAPACK below)
  mv $lapacklib $lapacklib.atlas


Debian, RedHat, and other distributors provide precompiled binaries of LAPACK. Try the Precompiled packages on Ubuntu/Debian or check the web for availability.
You may also recompile it from source:

  # set Fortran compiler
  # GNU GCC >=4.0 Fortran 77/95: gfortran
  FC=gfortran; TIMER=INT_ETIME
  # GNU GCC < 4.0 Fortran 77:    g77
  # check that Fortran compiler works
  $FC --version
  cd $PD
  tar xzvf $lib
  cd lapack-*
  # add CPU specific options to OPTS, e.g. -march=pentium4 -msse2 (cf. man gcc)
  # set correct Fortran compiler (check additional options in!):
  make "FORTRAN=$FC" "LOADER=$FC" \
  "BLASLIB=$PD/atlas/lib/libf77blas.a $PD/atlas/lib/libatlas.a" \
  "OPTS=-funroll-all-loops -O3 $OOPTS" \
  # run tests (optional)
  make lapack_testing

Now we have to add the missing LAPACK functions to the ATLAS library:
(cf. $PD/atlas/README, Building a complete LAPACK library)

  cd $PD/atlas/lib
  cp $lapacklib.atlas $lapacklib
  mkdir tmp; cd tmp
  ar x $PD/lapack-*/lapack_LINUX.a
  ar r ../liblapack.a *.o
  cd ..; rm -rf tmp
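The extract-and-append step above is a generic static-archive merge; it can be sketched with dummy archive members (the file and directory names below are placeholders, not the real LAPACK/ATLAS files):

```shell
# demonstrate merging the members of one static archive into another
demo=/tmp/ar-merge-demo
rm -rf $demo && mkdir -p $demo && cd $demo
echo base  > base.o                 # stand-in for an ATLAS object file
echo extra > extra.o                # stand-in for a LAPACK object file
ar r libbase.a base.o               # the "incomplete" library
ar r libextra.a extra.o             # the library with the missing routines
mkdir tmp && cd tmp
ar x ../libextra.a                  # extract all members ("ar x")
ar r ../libbase.a *.o               # append them to the base library ("ar r")
cd .. && rm -rf tmp
ar t libbase.a                      # lists base.o and extra.o
```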


MPICH, OpenMPI, LAM/MPI, or any other MPI library that implements the MPI standard (version 1 or 2) may be used.

You need rsh (recommended) or ssh to be installed and configured properly! Don't forget to create a ".rhosts" file with the names of all machines (also your local machine!) in your home directory (cf. "man rhosts"). The configuration can be tested with "rsh $HOSTNAME uname -a". You may also use ssh for encrypted communication between processors.
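A minimal $HOME/.rhosts sketch, with one trusted host (optionally followed by a user name) per line; the hostnames below are placeholders (cf. "man rhosts" for the exact format):

```
localhost
node01.example.com
node02.example.com myuser
```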

The directory $PD/mpi/bin should be added to the $PATH variable. To make this permanent, update your login scripts (e.g. .bashrc, .login, .profile) by appending the following snippet:

  PATH=$PD/mpi/bin:$PATH; export PATH

Alternatively, the programs installed in $PD/mpi/bin can be copied to $HOME/bin or any other directory in your $PATH, so that mpirun and the other MPI tools can be called from the command line.
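Appending the PATH setting to a login script can be sketched like this (bash syntax; the file name below is a demo placeholder for your real .bashrc):

```shell
# append the PATH setting to a (demo) login script, but only once
profile=/tmp/demo_bashrc
touch $profile
if ! grep -q 'mpi/bin' $profile; then
    echo 'PATH=$PD/mpi/bin:$PATH; export PATH' >> $profile
fi
cat $profile
```

The single quotes keep $PD and $PATH unexpanded in the script, so they are evaluated at login time.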


If ssh is to be used instead of rsh for logging in to remote machines, use "./configure -rsh=ssh" when compiling MPICH. In this case public key authentication should be configured for ssh to enable login without passwords (cf. "man ssh").
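Setting up public key authentication can be sketched as follows (the key file name is a demo placeholder; the commented-out ssh test assumes a fully configured system):

```shell
# generate a password-less key pair and authorize it for logins
mkdir -p $HOME/.ssh && chmod 700 $HOME/.ssh
ssh-keygen -q -t rsa -N "" -f $HOME/.ssh/magpar_demo_key
cat $HOME/.ssh/magpar_demo_key.pub >> $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/authorized_keys
# verify password-less login (uncomment on a configured system):
# ssh -i $HOME/.ssh/magpar_demo_key $HOSTNAME uname -a
```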

The configure script of MPICH will try both rsh and ssh, and print a warning if neither service is configured properly.

  cd $PD
  wget -N --retr-symlinks$lib
  # better download the latest version from the MPICH website
  tar xzvf $lib
  # change into mpich2 subdirectory (adjust to the downloaded version)
  cd mpich2-*
  # use "ssm" (sockets and shared memory) for use on clusters of SMPs
  # (communication on the same machine goes through shared memory;
  # communication between different machines goes over sockets)
  # instead of default "sock"
  ./configure --prefix=$PD/mpich2 --with-device=ch3:ssm 2>&1 | tee configure.log
  make -j 1 2>&1 | tee make.log
  make install 2>&1 | tee install.log
  # set symbolic link to MPICH installation directory
  ln -s mpich2 $PD/mpi
  # Please refer to $PD/mpi/README or
  # $PD/mpi/doc on how to use MPICH2 and
  # start a ring of MPI's process managers mpd!

Installation instructions for MPICH1 and LAM/MPI have been moved to the FAQ.


  cd $PD
  wget -N$lib.tar.gz
  tar xzvf $lib
  cd $lib
  make "CC=$PD/mpi/bin/mpicc" "LD=$PD/mpi/bin/mpicc"
  # run tests (optional)
  cd Graphs
  $PD/mpi/bin/mpirun -np 4 ptest rotor.graph
  # more tests in ParMetis-3.1.1/INSTALL


(SUNDIALS version 2.3.0)

Download the library from the SUNDIALS website (registration required).

  cd $PD
  tar xzvf $lib.tar.gz
  cd $PD/$lib
  # set compiler options (modify for your setup!)
  # add CPU specific options, e.g. -march=pentium4 -msse2 (cf. man gcc)
  CFLAGS="-O3"; export CFLAGS
  ./configure --prefix=$PD/$lib --with-mpi-root=$PD/mpi
  make && make -i install
  # (generates static libraries and installs libraries and include files
  # in $PD/$lib/libs and $PD/$lib/include)


(PETSc version 2.3.0 and later)

Starting with PETSc version 2.3.0 you have to use the automatic Python-based configure system, which requires Python 2.2 or later. Please refer to the FAQ Installing Python if you need to install Python by hand.

  cd $PD
  tar xzvf $lib.tar.gz
  cd $lib
  # set environment variables
  # (here: bash style - use "setenv" in csh)
  PETSC_DIR=$PD/$lib; export PETSC_DIR
  # set PETSC_ARCH and PRECISION for your platform and build:
  export PETSC_ARCH
  export PRECISION
  # edit
  # (select MPI, optional libraries, optimization options, etc.)
  # use the templates in $PETSC_DIR/config/ for platforms other than Linux
  # copy PETSc configuration script
  # (needs to be a copy - must not be a symbolic link!)
  cp $MAGPAR_HOME/src/ $PETSC_DIR/config/
  # Run
  # ./config/ --help
  # to see all command line options for
  # for static binaries edit
  # $PETSC_DIR/bmake/PETSc-config-magpar/petscconf (recommended):
  # remove all occurrences of "-lgcc_s" and add "-static" to the linker flags:
  # CC_LINKER_FLAGS =   -Wall -O3 -static
  make all
  # run tests (optional)
  make test

Also refer to the installation instructions on the PETSc homepage!


magpar requires one of the following combinations:
PETSc 2.3.3 and TAO 1.9 (highly recommended), or
PETSc 2.3.2 and TAO 1.8.2, or
PETSc 2.3.0 and TAO 1.8, or
PETSc 2.2.1 and TAO 1.7, or
PETSc 2.2.0 and TAO 1.6

  wget -N$lib
  tar xzvf $lib
  TAO_DIR=$PD/$lib; export TAO_DIR
  cd $TAO_DIR


  cd $PD
  wget -N$lib.tar.gz
  tar xzvf $lib.tar.gz
  ln -s $lib zlib
  cd $lib
  make CFLAGS="-O -fPIC" && make test


  cd $PD
  wget -N$lib.tar.gz?download
  tar xzvf $lib.tar.gz
  ln -s $lib libpng
  cd $lib
  CFLAGS="-I$PD/zlib"; export CFLAGS
  LDFLAGS="-L$PD/zlib"; export LDFLAGS
  ./configure --prefix=$instdir --enable-shared=no 2>&1 | tee configure.log
  make 2>&1 | tee make.log
  make install 2>&1 | tee makeinst.log
  make check 2>&1 | tee makecheck.log
  # alternatively use the old method with a static Makefile:
  cp scripts/makefile.linux Makefile
  make ZLIBLIB=../zlib ZLIBINC=../zlib && make test


Once all libraries are compiled and installed, compile magpar with

  cd $MAGPAR_HOME/src
  make

If everything compiled without errors, you should get the executable magpar.exe.

magpar - Parallel Finite Element Micromagnetics Package
Copyright (C) 2002-2009 Werner Scholz