Simple templates for magpar functions:
Initialization function (called during serial or parallel initialization):
int MagparFunctionInit(GridData *gdata,Vec vec1,PetscReal *real1)
{
  /* first executable line of each PETSc function, used for error handling */
  /* macros defined in griddata.h, see also PetscFunctionBegin */
  /* print log information on stdout (source file, function name) */
  MagparFunctionLogBegin;

  [do stuff]

  /* print timing information */
  MagparFunctionLogReturn(0);
}
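For illustration, a minimal sketch of how this template might be filled in. The function name MyExampleInit, the vector length, and the local "ierr" declaration are assumptions for this example, not actual magpar code; gdata->M is the global magnetization vector mentioned in the debugging notes below.

int MyExampleInit(GridData *gdata)
{
  int ierr;   /* assumed local error-code variable for CHKERRQ */

  MagparFunctionLogBegin;

  /* illustrative work only: create a distributed vector of
     (arbitrarily chosen) global length 100 for the magnetization */
  ierr = VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,100,&gdata->M);CHKERRQ(ierr);

  MagparFunctionLogReturn(0);
}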
Solver function (called many times during solution):
int MagparFunction(GridData *gdata,Vec vec1,PetscReal *real1)
{
  /* by default do not print any information on stdout */
  /* if "-info" option is active: print log information on stdout */
  MagparFunctionInfoBegin;

  [do stuff]

  /* by default do not print any information on stdout */
  /* if "-info" option is active: print timing information on stdout */
  MagparFunctionInfoReturn(0);
}
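Analogously, a sketch of a solver-style function using the quiet macros; the function name and the norm check are illustrative only, not actual magpar code:

int MyExampleSolve(GridData *gdata)
{
  int ierr;       /* assumed local error-code variable for CHKERRQ */
  PetscReal nrm;

  MagparFunctionInfoBegin;

  /* illustrative work only: a cheap sanity check that can run
     every time step without cluttering stdout */
  ierr = VecNorm(gdata->M,NORM_2,&nrm);CHKERRQ(ierr);

  MagparFunctionInfoReturn(0);
}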
The following suggestions might be helpful for debugging:
PetscPrintf(PETSC_COMM_WORLD,"deb01: Au1\n"); ierr = MatView(Au1,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); PetscPrintf(PETSC_COMM_WORLD,"deb02: u1\n"); ierr = VecView(u1,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr); /* get output from all processors */ PetscBarrier(PETSC_NULL); PetscSynchronizedPrintf(PETSC_COMM_WORLD, "[%i]deb03: %i %g!\n",rank,intvar,doublevar ); PetscSynchronizedFlush(PETSC_COMM_WORLD);
In addition, it is often useful to ensure that all parallel processes are synchronized at a certain point in the program. This is easily accomplished with the PETSc function "PetscBarrier", which blocks until it has been called by all processes owning a given PETSc object, e.g. the global magnetization vector "gdata->M". Passing PETSC_NULL, as below, synchronizes all processes in PETSC_COMM_WORLD.
PetscPrintf(PETSC_COMM_WORLD,"deb03: barrier\n");
PetscBarrier(PETSC_NULL);
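To synchronize only the processes that own a particular object, the object itself can be passed instead of PETSC_NULL. A sketch, assuming the PETSc 2.x-style signature PetscBarrier(PetscObject) and the gdata->M vector mentioned above:

/* block until all processes owning gdata->M have reached this line */
PetscBarrier((PetscObject)gdata->M);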
Required information:
The more information you provide, the easier it is to track down the problem.
Bug reports may be sent to: magpar(at)magpar.net