Pytest to test coupled OpenFOAM

From cpl-wiki

There are two ways to test coupled code. As we cannot invoke an mpiexec version of pytest from inside the run, it must instead be run from outside, as one part of a coupled MPI instance:

1) Run a unit testing framework as one part of a coupled MPI run, with the code to be tested as the second process, allowing us to directly test all received information.
2) Use a unit testing framework to launch mpiexec instances via subprocess for a range of parameters, and test the output written to a file.

We will cover both methods here.

Directly Coupled

We start by directly coupling a unit testing framework, here pytest, which loops over a range of cases, sending information and checking that the exchanged information is as expected. The example here is for SediFOAM, but the changes for IcoFOAM are minimal (in terms of the exchange of information). The files and OpenFOAM inputs for this example are located under the test folder.


The code is run with OpenFOAM and pytest as two parts of an MPMD run, invoked with a command line of the form,

mpiexec -n 1 CPLSediFOAM -case ./openfoam -parallel : -n 1 py.test -v ./python_dummy/

Here the pytest setup calls MPI_Init and sets up the CPL library, as well as creating an analytical solution to compare against. The teardown (after the yield) finalises both MPI and CPL.

The test is parameterised to simply run for each timestep of the coupled simulation and assert that the error vs. the analytical solution is between expected bounds. Note that the bounds are a bit of a black art, given the finite nature of numerical errors, so need to be tweaked for each case based on the expected output of the coupled run. The pytest code is as follows,

#!/usr/bin/env python
import numpy as np
import pytest

from CouetteAnalytical import CouetteAnalytical as CA

@pytest.fixture(scope="module")
def setup():

    #Import CPL library
    from cplpy import CPL

    #initialise MPI
    from mpi4py import MPI
    comm = MPI.COMM_WORLD

    # Parameters of the cpu topology (cartesian grid)
    npxyz = np.array([1, 1, 1], order='F', dtype=np.int32)
    xyzL = np.array([1., 1., 1.], order='F', dtype=np.float64)
    xyz_orig = np.array([0.0, 0.0, 0.0], order='F', dtype=np.float64)

    #initialise CPL
    CPL = CPL()
    MD_COMM = CPL.init(CPL.MD_REALM)
    CPL.setup_md(MD_COMM.Create_cart([npxyz[0], npxyz[1], npxyz[2]]), xyzL, xyz_orig)
    recvbuf, sendbuf = CPL.get_arrays(recv_size=9, send_size=8)

    #Analytical solution
    dt = 0.05
    U = 1.0
    nu = 1.004e-2
    Re = xyzL[1]/nu   #Note Reynolds is independent of velocity in analytical fn
    ncx = CPL.get("ncx")
    ncy = CPL.get("ncy")
    ncz = CPL.get("ncz")
    CAObj = CA(Re=Re, U=U, Lmin=0., Lmax=xyzL[1], npoints=2*ncy+1, nmodes=100*ncy)

    #Yield statement delineates end of setup and start of teardown
    yield [CPL, MD_COMM, recvbuf, sendbuf, CAObj, dt, U, nu]

    #Teardown: finalise CPL and MPI
    CPL.finalize()
    MPI.Finalize()

#Main time loop
time = range(1000)
@pytest.mark.parametrize("time", time)
def test_loop(setup, time):

    #Get run parameters from setup
    CPL, MD_COMM, recvbuf, sendbuf, CAObj, dt, U, nu = setup

    # Recv data: 
    # [Ux, Uy, Uz, gradPx, gradPy, gradPz, divTaux, divTauy, divTauz]
    recvbuf, ierr = CPL.recv(recvbuf)

    # Zero send buffer and set porosity to one
    # [Ux, Uy, Uz, Fx, Fy, Fz, Cd, e]
    sendbuf[...] = 0.

    #Get analytical solution
    y_anal, u_anal = CAObj.get_vprofile(time*dt)

    #Assert error bounds for L2 norm
    ur = np.mean(recvbuf[0,:,:,:],(0,2))
    error = np.sum(np.abs(100*(u_anal[1:-1:2] - ur)/U))
    print(time, "Error = ", error)
    if time < 10:
        assert error < 20., "Error in initial 10 steps greater than 20%"
    elif time < 30:
        assert error < 10., "Error between 10 and 30 steps greater than 10%"
    elif time < 50:
        assert error < 5., "Error between 30 and 50 steps greater than 5%"
    elif time < 300:
        assert error < 3., "Error between 50 and 300 steps greater than 3%"
    elif time < 500:
        assert error < 2., "Error between 300 and 500 steps greater than 2%"
    else:
        assert error < 1., "Error after 500 steps greater than 1%"

The output is essentially 1000 tests confirming that the information exchanged is as expected:

 python_dummy/[1] PASSED
 python_dummy/[2] PASSED
 ...
 python_dummy/[946] PASSED
 python_dummy/[947] PASSED

If an error is detected, pytest reports the failing assertion, for example:

assert error < 2., "Error between 300 and 500 steps greater than 2%"

This test is automated on Travis CI to ensure changes to OpenFOAM do not break this essential functionality.

This forms a template for writing a coupled validation, where the error bounds and the information sent can be adapted as needed. For example, the parametrisation could be over the value to send, with the received value tested to ensure the expected change is observed in the coupled code.
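Such a round-trip check could be sketched as follows, where the hypothetical `exchange` helper stands in for the paired CPL send/recv calls (assumed here, purely for illustration, to echo the field back unchanged):

```python
import numpy as np
import pytest

def exchange(sendbuf):
    # Hypothetical stand-in for the CPL send/recv pair: in a real
    # coupled test the partner code would transform the sent field and
    # return the result; here we assume it is echoed back unchanged
    return sendbuf.copy()

@pytest.mark.parametrize("sendval", [0.0, 0.5, 1.0, 2.0])
def test_roundtrip(sendval):
    # Fill the send buffer with the parametrised value
    sendbuf = np.full((3, 4, 4, 4), sendval)
    recvbuf = exchange(sendbuf)
    # Assert the coupled code returns the expected change
    # (the identity, under the echo assumption above)
    assert np.allclose(recvbuf, sendval)
```

The parametrised values and the expected transformation would be replaced by whatever the coupled code is meant to do to the exchanged field.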

Subprocess mpiexec

The other example uses python subprocess to create a range of MPI jobs for various test scenarios. In order to make changes to the input systems of both OpenFOAM and the mock scripts, run multiple tests simultaneously and create the required directory structures, we use simwraplib.

New folders are created for each run, and the OpenFOAM input files and python script are copied and edited for each case. In this example, the boundary condition in both codes is changed and the resulting error compared and printed.

import pytest
import os
import sys
import numpy as np
import subprocess as sp

class cd:
    """Context manager for changing the current working directory"""
    def __init__(self, newPath):
        self.newPath = os.path.expanduser(newPath)

    def __enter__(self):
        self.savedPath = os.getcwd()
        os.chdir(self.newPath)

    def __exit__(self, etype, value, traceback):
        os.chdir(self.savedPath)

# Import simwraplib, cloning it if not already present
try:
    sys.path.insert(0, "./SimWrapPy/")
    import simwraplib as swl
except ImportError:
    cmd = "git clone ./SimWrapPy"  # clone from the SimWrapPy repository
    downloadout = sp.check_output(cmd, shell=True)
    sys.path.insert(0, "./SimWrapPy")
    import simwraplib as swl

#Define test directory based on script file
TEST_DIR = os.path.dirname(os.path.realpath(__file__))

#Parameterise range of cases
params = [0.2, 0.5, 1.0, 2.0]
@pytest.mark.parametrize("wallvel", params)
def test_newtest(wallvel):

    # Inputs that are the same for every thread
    basedir = TEST_DIR
    srcdir = None
    executable = "/CPLSediFOAM"
    inputfile = "/openfoam"
    rundir = TEST_DIR + "/run" + str(wallvel)

    #Clean previous result, generate grid and decompose for parallel run
    with cd(TEST_DIR + inputfile):
        sp.check_output("python -f", shell=True)  # case-specific cleanup script
        sp.check_output("blockMesh", shell=True)
        sp.check_output("decomposePar", shell=True)

    #Setup Changes
    keyvals = {"boundaryField":{"top":{"type":"fixedValue", "value":[[wallvel,0,0]]}, 
                                "bottom":{"type":"keep", "value":"keep"},
                                "streamwiseIn":{"type":"keep", "neighbourPatch":"keep"},
                                "streamwiseOut":{"type":"keep", "neighbourPatch":"keep"},
                                "front":{"type":"keep", "neighbourPatch":"keep"},
                                "back":{"type":"keep", "neighbourPatch":"keep"}}}
    changes = {"Ub":keyvals}

    with cd(TEST_DIR):

        #Setup an OpenFOAM run object
        of = swl.OpenFOAMRun(None, basedir, rundir,
                             executable, inputfile)

        #Setup a mock script
        mockscript = "./python_dummy/"
        mock = swl.ScriptRun(rundir, mockscript, inputchanges={"U = ": wallvel})

        #Setup a coupled run
        run = swl.CPLRun(None, basedir, rundir, [mock, of])

        #Run the case
        run.execute(blocking=True, print_output=True)
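The inputchanges dictionary passed to ScriptRun maps a keyword to a new value to be written into the mock script. A minimal sketch of this kind of keyword substitution, illustrative only (the actual simwraplib implementation may differ):

```python
def apply_inputchanges(text, inputchanges):
    """Rewrite any line starting with a given keyword so that the keyword
    is followed by the new value; e.g. {"U = ": 2.0} turns the line
    "U = 1.0" into "U = 2.0". Sketch of what simwraplib's inputchanges
    argument does, not the real implementation."""
    lines = text.splitlines()
    for key, newval in inputchanges.items():
        for i, line in enumerate(lines):
            stripped = line.lstrip()
            if stripped.startswith(key):
                # Preserve indentation, replace everything after the key
                indent = line[:len(line) - len(stripped)]
                lines[i] = indent + key + str(newval)
    return "\n".join(lines)
```

Applied to the mock script above, `{"U = ": wallvel}` would rewrite the wall-velocity line so that each run directory gets a mock script consistent with its OpenFOAM boundary condition.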