Simulating transport, such as diffusion or flow, is a key output of pore network modeling (PNM). This notebook explains the underlying process for these simulations in OpenPNM.

The transport of a species between pores i and j is dictated by the conductance g of the conduit between them. g depends on the conduit geometry as well as the fluid properties. Consider the case of diffusion, which is described by Fick’s law:

$n_A = D_{AB}\frac{A}{L} \Delta C_A = g^D \Delta C_A$

$g^D$ is the diffusive conductance and is a function of pore diameter, shape, length, as well as the physical properties of the fluid such as composition, temperature, pressure, and so on. Determination of $g^D$ is the subject of a different notebook, so for the present purposes random values between 0 and 1.0 will be used. Consider a 2x2 network:

import openpnm as op
import numpy as np
op.visualization.set_mpl_style()

pn = op.network.Cubic(shape=[2, 2, 1])
air = op.phase.Phase(network=pn)
air['throat.diffusive_conductance'] = np.random.rand(pn.Nt)
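Before moving on, the rate expression above can be made concrete for a single conduit. The sketch below is a minimal illustration with made-up values for $D_{AB}$, $A$, and $L$ (they are not taken from the network created above):

```python
# Made-up conduit properties, for illustration only
D_AB = 2.0e-5    # diffusivity of A in B [m^2/s]
A_c = 1.0e-12    # cross-sectional area of the conduit [m^2]
L = 1.0e-6       # conduit length [m]

g_D = D_AB * A_c / L   # diffusive conductance [m^3/s]
dC = 5.0               # concentration difference across the conduit [mol/m^3]
n_A = g_D * dC         # molar flow of species A [mol/s]
print(g_D, n_A)
```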


## Creating the Coefficient Matrix

The rate equation given above applies to each throat in the network, but in a pore network we want to solve for the quantity of interest in each pore. Each pore connects to several throats, and depending on the concentrations in the neighboring pores, the rate of diffusion through each throat may be either into or out of a given pore. At steady state and in the absence of reaction, the net rate into pore i is zero, so we can write the mass balance equation for pore i using summation notation as:

$\sum_j g_{i, j} (x_j - x_i) = 0$

We can write the above equation for each pore in the network to obtain a system of coupled linear equations, $Ax = b$, with the formal solution:

$x = A^{-1} b$

where $b$ holds the right-hand side of each balance equation.
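To see where the coefficient matrix comes from, the per-pore balances can be assembled by hand. The following sketch uses a hypothetical 3-pore chain with made-up conductances (unrelated to the network above); each throat contributes $+g$ to the off-diagonal entries of its two pores and $-g$ to their diagonals:

```python
import numpy as np

# Hypothetical 3-pore chain: throat 0 connects pores (0, 1), throat 1 connects (1, 2)
conns = np.array([[0, 1], [1, 2]])
g = np.array([0.4, 0.7])   # made-up diffusive conductances

A = np.zeros((3, 3))
for (i, j), g_ij in zip(conns, g):
    A[i, j] += g_ij   # x_j term in the balance for pore i
    A[j, i] += g_ij   # x_i term in the balance for pore j
    A[i, i] -= g_ij   # -x_i term in the balance for pore i
    A[j, j] -= g_ij   # -x_j term in the balance for pore j

print(A)
```

Each row sums to zero, since the balance $\sum_j g_{i,j}(x_j - x_i) = 0$ has no constant term; up to sign, this matrix is the graph laplacian of the conductance-weighted network.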

import scipy.sparse.csgraph as csg
# Adjacency matrix weighted by the conductances, needed by the laplacian function
am = pn.create_adjacency_matrix(weights=air['throat.diffusive_conductance'], fmt='coo')
A = -csg.laplacian(am).todense()  # negated so row i encodes sum_j g_ij*(x_j - x_i)
print(A)

[[-0.34571008  0.33693857  0.00877151  0.        ]
[ 0.33693857 -0.81731816  0.          0.48037959]
[ 0.00877151  0.         -0.91197073  0.90319922]
[ 0.          0.48037959  0.90319922 -1.38357881]]


## Applying Value and Rate Boundary Conditions

If b=0 for all pores then x=0 throughout the network, so boundary conditions must be applied to obtain a non-trivial result. One option is to apply “rate” boundary conditions, where the net movement of the species into or out of a pore is fixed at a non-zero value:

$\sum_j g_{i, j} (x_j - x_i) = -r_i$

where $r_i$ is the net rate of generation of the species in pore $i$, so extraction corresponds to a negative $r_i$ and contributes $b_i = -r_i$ to the right-hand side.

Another option is to specify the value of the species of interest in a pore. In this case the balance equation can be replaced with:

$x_i = b_i$

Let’s set $x_0 = 2.0$ and $r_3 = -0.5$.

b = np.zeros([pn.Np, ], dtype=float)
b[0] = 2.0     # value BC: x_0 = 2.0
A[0, :] = 0.0  # replace row 0 with the equation x_0 = b_0
A[0, 0] = 1.0
b[3] = 0.5     # rate BC applied to pore 3
print("The A matrix is:\n", A)
print("The b matrix is:\n", b)

The A matrix is:
[[ 1.          0.          0.          0.        ]
[ 0.33693857 -0.81731816  0.          0.48037959]
[ 0.00877151  0.         -0.91197073  0.90319922]
[ 0.          0.48037959  0.90319922 -1.38357881]]
The b matrix is:
[2.  0.  0.  0.5]


## Solving the System of Equations

Once the coefficient matrix has been built from the system of equations and the boundary conditions have been applied, we can solve the system $x = A^{-1}b$ to find $x_i$ in each pore. This can be done with any number of numerical solvers:

from scipy.linalg import solve
x = solve(A, b)
print(x)

[ 2.          0.57841014 -0.39543049 -0.41869396]
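As a sanity check, the solution should satisfy every equation in the system: row 0 enforces the value BC, row 3 the rate BC, and the interior rows balance to zero. Re-entering the printed $A$ and $b$ from above (truncated to the digits shown):

```python
import numpy as np

# A and b as printed above (rounded to the displayed digits)
A = np.array([[ 1.0,         0.0,         0.0,         0.0       ],
              [ 0.33693857, -0.81731816,  0.0,         0.48037959],
              [ 0.00877151,  0.0,        -0.91197073,  0.90319922],
              [ 0.0,         0.48037959,  0.90319922, -1.38357881]])
b = np.array([2.0, 0.0, 0.0, 0.5])

x = np.linalg.solve(A, b)
residual = A @ x - b
print(x)
print(abs(residual).max())  # ~0, i.e. every balance is satisfied
```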


## Sparse Matrices

In the above example the A matrix was small, so we could convert it to dense form. In a real network the coefficient matrix will be far larger, so a sparse representation is required. The overall process is the same, but some of the numerical steps differ to account for the sparse storage of the coefficient matrix. In particular, the application of value BCs is more involved:

import scipy.sparse.csgraph as csg
am = pn.create_adjacency_matrix(weights=air['throat.diffusive_conductance'], fmt='coo')
A = -csg.laplacian(am)  # kept sparse (COO) this time; negated so rows encode the pore balances
b = np.zeros([pn.Np, ], dtype=float)


We can set the rate boundary condition in pore 3 in the same manner:

b[3] = 0.5


Setting the value BC on pore 0 requires replacing row 0 with zeros everywhere except a 1.0 on the diagonal. However, in sparse form this cannot be done with direct indexing. Instead, we can inspect the row attribute of A to find which stored entries lie on row 0:

print(A.row)
hits = A.row == 0

[0 2 0 1 1 3 2 3 0 1 2 3]


Now we set the data values to zero at these locations:

A.data[hits] = 0.0


Then put a 1.0 back on the diagonal of row 0:

diag = A.diagonal()
diag[0] = 1.0
A.setdiag(diag)


And finally we can use eliminate_zeros to convert it into a proper sparse representation again:

A.eliminate_zeros()
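To see what eliminate_zeros does, here is a toy COO matrix (made-up values, unrelated to the network) with one explicitly stored zero:

```python
import numpy as np
from scipy.sparse import coo_matrix

row = np.array([0, 0, 1])
col = np.array([0, 1, 1])
data = np.array([1.0, 0.0, 3.0])   # the 0.0 is explicitly stored
M = coo_matrix((data, (row, col)), shape=(2, 2))

nnz_before = M.nnz      # counts stored entries, including explicit zeros
M.eliminate_zeros()     # drops the explicitly stored zeros
nnz_after = M.nnz
print(nnz_before, nnz_after)
```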


And let’s not forget to add the value BC to b:

b[0] = 2.0


Let’s inspect our new coefficient matrix to be sure:

print("The A matrix is:\n", A.todense())
print("The b matrix is:\n", b)

The A matrix is:
[[ 1.          0.          0.          0.        ]
[ 0.33693857 -0.81731816  0.          0.48037959]
[ 0.00877151  0.         -0.91197073  0.90319922]
[ 0.          0.48037959  0.90319922 -1.38357881]]
The b matrix is:
[2.  0.  0.  0.5]


Now we are ready to use the sparse solvers in scipy:

from scipy.sparse.linalg import spsolve
x = spsolve(A.tocsr(), b)
print(x)

[ 2.          0.57841014 -0.39543049 -0.41869396]


We can see the numerical results are identical. The additional effort required to deal with the sparse format is worthwhile, since large networks would quickly become numerically intractable in dense form.
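To put “numerically intractable” in perspective, here is a rough back-of-the-envelope estimate, assuming a cubic network where each row of $A$ holds at most 7 nonzeros (the diagonal plus up to 6 neighbors) stored as 8-byte floats:

```python
Np = 100**3                  # pores in a 100 x 100 x 100 cubic network
dense_bytes = Np * Np * 8    # every entry of A stored explicitly
sparse_bytes = Np * 7 * 8    # only the ~7 nonzeros per row stored

print(f"dense:  {dense_bytes / 1e9:,.0f} GB")   # thousands of GB
print(f"sparse: {sparse_bytes / 1e6:,.0f} MB")  # tens of MB
```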