Matrix Operations

API Keywords: matrix matrixf matrixcf linear algebra

Matrices are used for solving linear systems of equations and appear extensively in polynomial fitting, adaptive equalization, and filter design. In liquid, matrices are represented simply as one-dimensional arrays and do not rely on special objects for their manipulation. This helps the portability of the code and eases integration into other libraries. Here is a simple example of the matrix interface:

#include <liquid/liquid.h>

int main() {
    // designate X as a 4 x 4 matrix
    float X[16] = {
       0.84382,  -2.38304,   1.43061,  -1.66604,
       3.99475,   0.88066,   4.69373,   0.44563,
       7.28072,  -2.06608,   0.67074,   9.80657,
       6.07741,  -3.93099,   1.22826,  -0.42142};
    matrixf_print(X,4,4);

    // L/U decomp (Doolittle's method)
    float L[16], U[16], P[16];
    matrixf_ludecomp_doolittle(X,4,4,L,U,P);
}

Notice that all routines for the type float are prefixed with matrixf. This follows the naming convention of the standard C library, which appends an f to the names of routines operating on single-precision floating-point types. Similar matrix interfaces exist in liquid for double (matrix), double complex (matrixc), and float complex (matrixcf).
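
For example, the complex float interface mirrors the float one exactly; a minimal sketch (note the matrixcf prefix):

#include <liquid/liquid.h>
#include <complex.h>

int main() {
    // a 2 x 2 float complex matrix, using the same flat-array convention
    float complex A[4] = {
        1.0f + 2.0f*_Complex_I,   0.5f - 0.3f*_Complex_I,
       -0.7f + 0.1f*_Complex_I,   2.0f + 1.0f*_Complex_I};
    matrixcf_print(A,2,2);
    return 0;
}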

Basic math operations

This section describes the basic matrix math operations, including addition, subtraction, point-wise multiplication and division, transposition, and initializing the identity matrix.

matrix_access (access element)

Because matrices in liquid are really just one-dimensional arrays, indexing matrix values for storage or retrieval is as straightforward as indexing the array itself. liquid also provides a simple macro for ensuring the proper value is accessed: matrix_access(X,R,C,r,c) accesses the element of an \(R \times C\) matrix \(\vec{X}\) at row \(r\) and column \(c\). This method is really just a pre-processor macro which performs a literal string replacement

#define matrix_access(X,R,C,r,c) ((X)[(r)*(C)+(c)])

and can be used for both setting and retrieving values of a matrix. For example,

X =
  0.406911   0.118444   0.923281   0.827254   0.463265
  0.038897   0.132381   0.061137   0.880045   0.570341
  0.151206   0.439508   0.695207   0.215935   0.999683
  0.808384   0.601597   0.149171   0.975722   0.205819

float v = matrix_access(X,4,5,0,1);
v =
  0.118444

matrix_access(X,4,5,2,3) = 0;
X =
  0.406911   0.118444   0.923281   0.827254   0.463265
  0.038897   0.132381   0.061137   0.880045   0.570341
  0.151206   0.439508   0.695207   0.000000   0.999683
  0.808384   0.601597   0.149171   0.975722   0.205819

Because this method is really just a macro, there is no error-checking to ensure that one is accessing the matrix within its memory bounds. Therefore, special care must be taken when programming. Furthermore, matrix_access() can be used for all matrix types ( matrixf , matrixcf , etc.).
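
For instance, a minimal compilable sketch exercising both the set and get forms of the macro:

#include <stdio.h>
#include <liquid/liquid.h>

int main() {
    // a 4 x 5 matrix, initialized to zero
    float X[20] = {0};

    // set the element at row 2, column 3, then read it back
    matrix_access(X,4,5,2,3) = 7.0f;
    float v = matrix_access(X,4,5,2,3);
    printf("v = %f\n", v);  // prints 7.000000
    return 0;
}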

matrixf_add, matrixf_sub, matrixf_pmul, and matrixf_pdiv (scalar math operations)

The matrixf_add(*x,*y,*z,m,n) , matrixf_sub(*x,*y,*z,m,n) , matrixf_pmul(*x,*y,*z,m,n) , and matrixf_pdiv(*x,*y,*z,m,n) methods perform point-wise (element-by-element) addition, subtraction, multiplication, and division of two \(m \times n\) matrices \(\vec{X}\) and \(\vec{Y}\). That is, \(\vec{Z}_{i,k} = \vec{X}_{i,k} + \vec{Y}_{i,k}\) for all \(i\), \(k\), and similarly for subtraction, multiplication, and division. It is very important to understand the difference between matrixf_pmul() and matrixf_mul(), as well as matrixf_pdiv() and matrixf_div(); in each case the latter performs a vastly different operation (see Sections [ref:section-matrix-mul] and [ref:section-matrix-div], respectively).

X =                         Y =
  0.59027   0.83429           0.764108   0.741641
  0.67779   0.19793           0.660932   0.041723
  0.95075   0.33980           0.972282   0.347090

matrixf_pmul(X,Y,Z,3,2);
Z =
  0.4510300   0.6187437
  0.4479731   0.0082582
  0.9243971   0.1179412

matrixf_trans(), matrixf_hermitian() (transpose matrix)

The matrixf_trans(X,m,n,XT) method performs the regular matrix transpose operation on an \(m \times n\) matrix \(\vec{X}\). That is, the matrix is flipped on its main diagonal. Formally, \(\vec{A}^T_{i,j} = \vec{A}_{j,i}\). Here's a simple example:

$$ \left[ \begin{array}{ccc} 0 & 1 & 2 \\ 3 & 4 & 5 \\ \end{array} \right]^T = \left[ \begin{array}{cc} 0 & 3 \\ 1 & 4 \\ 2 & 5 \\ \end{array} \right] $$

Similarly, the matrixf_hermitian(X,m,n,XH) method computes the Hermitian transpose, which is identical to the regular transpose but with each element conjugated, viz \(\vec{A}^H_{i,j} = \vec{A}_{j,i}^*\). For real-valued matrices the two operations are identical.
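
For example, a minimal sketch with a complex matrix, following the four-argument signatures given above:

#include <liquid/liquid.h>
#include <complex.h>

int main() {
    // a 2 x 3 float complex matrix
    float complex X[6] = {
        1.0f + 1.0f*_Complex_I,  2.0f,  3.0f - 1.0f*_Complex_I,
        4.0f,                    5.0f,  6.0f + 2.0f*_Complex_I};

    float complex XT[6];    // 3 x 2 regular transpose
    float complex XH[6];    // 3 x 2 Hermitian (conjugate) transpose
    matrixcf_trans(X,2,3,XT);
    matrixcf_hermitian(X,2,3,XH);

    matrixcf_print(XT,3,2);
    matrixcf_print(XH,3,2);
    return 0;
}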

matrixf_eye() (identity matrix)

The matrixf_eye(*x,n) method generates the \(n \times n\) identity matrix \(\vec{I}_n\) :

$$ \vec{I}_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ \end{bmatrix} $$
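
For example:

float I3[9];
matrixf_eye(I3,3);      // initialize I3 to the 3 x 3 identity matrix
matrixf_print(I3,3,3);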

Elementary math operations

This section describes elementary math operations for linear systems of equations.

matrixf_swaprows() (swap rows)

Matrix row-swapping is often necessary to express a matrix in its row-reduced echelon form. The matrixf_swaprows(*X,m,n,i,j) method simply swaps rows \(i\) and \(j\) of an \(m \times n\) matrix \(\vec{X}\), viz

x =
  0.84381998 -2.38303995  1.43060994 -1.66603994
  3.99475002  0.88066000  4.69372988  0.44563001
  7.28072023 -2.06608009  0.67074001  9.80657005
  6.07741022 -3.93098998  1.22826004 -0.42142001

matrixf_swaprows(x,4,4,0,2);
  7.28072023 -2.06608009  0.67074001  9.80657005
  3.99475002  0.88066000  4.69372988  0.44563001
  0.84381998 -2.38303995  1.43060994 -1.66603994
  6.07741022 -3.93098998  1.22826004 -0.42142001

matrixf_pivot() (pivoting)

Note: the terminology used here for "pivot" differs somewhat from that found in the literature.

Given an \(n \times m\) matrix \(\vec{A}\),

$$ \vec{A} = \begin{bmatrix} A_{0,0} & A_{0,1} & \cdots & A_{0,m-1} \\ A_{1,0} & A_{1,1} & \cdots & A_{1,m-1} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n-1,0} & A_{n-1,1} & \cdots & A_{n-1,m-1} \end{bmatrix} $$

pivoting \(\vec{A}\) around \(\vec{A}_{a,b}\) gives

$$ \vec{B}_{i,j} = \left( \frac{\vec{A}_{i,b}}{\vec{A}_{a,b}} \right) \vec{A}_{a,j} - \vec{A}_{i,j} \qquad \forall\, i \ne a $$

The pivot element must not be zero. Row \(a\) is left unchanged in \(\vec{B}\). All elements of \(\vec{B}\) in column \(b\) are zero except for row \(a\). This is accomplished in liquid with the matrixf_pivot(*A,m,n,i,j) method. For our example \(4 \times 4\) matrix \(\vec{x}\), pivoting around \(\vec{x}_{1,2}\) gives:

matrixf_pivot(x,4,4,1,2);
  0.37374675  2.65145779  0.00000000  1.80186427
  3.99475002  0.88066000  4.69372988  0.44563001
 -6.70986557  2.19192743  0.00000000 -9.74288940
 -5.03205967  4.16144180  0.00000000  0.53803295

matrixf_mul() (multiplication)

Multiplication of two input matrices \(\vec{A}\) and \(\vec{B}\) is accomplished with the matrixf_mul(*A,ma,na,*B,mb,nb,*C,mc,nc) method, and is not to be confused with matrixf_pmul() in [ref:section-matrix-mathop]. If \(\vec{A}\) is \(m \times n\) and \(\vec{B}\) is \(n \times p\), then their product is computed as

$$ \bigl( \vec{A} \vec{B} \bigr)_{i,j} = \sum_{r=0}^{n-1} { \vec{A}_{i,r} \vec{B}_{r,j} } $$

Note that the number of columns of \(\vec{A}\) must be equal to the number of rows of \(\vec{B}\) , and that the resulting matrix is of size \(m \times p\) (the number of rows in \(\vec{A}\) and columns in \(\vec{B}\) ).

A =                 B =
    1   2   3           1   2   3
    4   5   6           4   5   6
                        7   8   9
matrixf_mul(A,2,3, B,3,3, C,2,3);

C =
    30  36  42
    66  81  96
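
Written out as a compilable program, the example above becomes:

#include <liquid/liquid.h>

int main() {
    float A[6] = {1, 2, 3,
                  4, 5, 6};      // 2 x 3
    float B[9] = {1, 2, 3,
                  4, 5, 6,
                  7, 8, 9};      // 3 x 3
    float C[6];                  // 2 x 3 result

    matrixf_mul(A,2,3, B,3,3, C,2,3);
    matrixf_print(C,2,3);
    return 0;
}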

Transpose multiplication

liquid also implements transpose-multiplication operations, commonly used in signal processing; [ref:section-matrix-trans] describes the difference between the \(\left(\cdot\right)^T\) and \(\left(\cdot\right)^H\) operations. The interfaces are tabulated below for an input \(m \times n\) matrix \(\vec{X}\).

Operation               Output Dimensions   Interface
\(\vec{X}\vec{X}^T\)    \(m \times m\)      matrixcf_mul_transpose(x,m,n,xxT)
\(\vec{X}\vec{X}^H\)    \(m \times m\)      matrixcf_mul_hermitian(x,m,n,xxH)
\(\vec{X}^T\vec{X}\)    \(n \times n\)      matrixcf_transpose_mul(x,m,n,xTx)
\(\vec{X}^H\vec{X}\)    \(n \times n\)      matrixcf_hermitian_mul(x,m,n,xHx)
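
For example, a minimal fragment computing \(\vec{X}^T\vec{X}\) for a real matrix, assuming the float variants mirror the complex interfaces tabulated above:

float X[8] = {1, 2,
              3, 4,
              5, 6,
              7, 8};            // 4 x 2
float XTX[4];                   // 2 x 2 result
matrixf_transpose_mul(X,4,2,XTX);
matrixf_print(XTX,2,2);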

Complex math operations

This section describes more involved math operations, including matrix inversion, the square matrix determinant, Gauss-Jordan elimination, lower/upper decomposition routines (Crout's and Doolittle's methods), Q/R decomposition, and Cholesky decomposition.

matrixf_inv() (inverse)

Matrix inversion is accomplished with the matrixf_inv(*X,m,n) method.

.. footnote
While matrix inversion requires a square matrix, `liquid`
internally checks to ensure $m=n$ on the input size for
$\vec{X}$.

Given an \(n \times n\) matrix \(\vec{A}\), liquid augments it with \(\vec{I}_n\):

$$ \left[\vec{A}|\vec{I}_n\right] = \left[ \begin{array}{cccc|cccc} A_{0,0} & A_{0,1} & \cdots & A_{0,n-1} & 1 & 0 & \cdots & 0 \\ A_{1,0} & A_{1,1} & \cdots & A_{1,n-1} & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots \\ A_{n-1,0} & A_{n-1,1} & \cdots & A_{n-1,n-1} & 0 & 0 & \cdots & 1 \\ \end{array} \right] $$

Next liquid performs elementary row operations to convert the matrix to its row-reduced echelon form. The resulting matrix has the identity matrix on the left and \(\vec{A}^{-1}\) on the right, viz

$$ \left[\vec{I}_n|\vec{A}^{-1}\right] = \left[ \begin{array}{cccc|cccc} 1 & 0 & \cdots & 0 & A^{-1}_{0,0} & A^{-1}_{0,1} & \cdots & A^{-1}_{0,n-1} \\ 0 & 1 & \cdots & 0 & A^{-1}_{1,0} & A^{-1}_{1,1} & \cdots & A^{-1}_{1,n-1} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & A^{-1}_{n-1,0} & A^{-1}_{n-1,1} & \cdots & A^{-1}_{n-1,n-1} \\ \end{array} \right] $$

The matrixf_inv() method uses Gauss-Jordan elimination (see matrixf_gjelim()) for row reduction and back-substitution. Pivot elements in \(\vec{A}\) with the largest magnitude are chosen to help stability in floating-point arithmetic.

matrixf_inv(x,4,4);
 -0.33453920  0.04643385 -0.04868321  0.23879384
 -0.42204019  0.12152659 -0.07431178  0.06774280
  0.35104612  0.15256262  0.04403552 -0.20177667
  0.13544561 -0.01930523  0.11944833 -0.14921521

matrixf_div()

The matrixf_div(*X,*Y,*Z,n) method simply computes \(\vec{Z} = \vec{Y}^{-1}\vec{X}\) where \(\vec{X}\), \(\vec{Y}\), and \(\vec{Z}\) are all \(n \times n\) matrices.
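
For example, a minimal fragment solving \(\vec{Z} = \vec{Y}^{-1}\vec{X}\) for \(2 \times 2\) matrices:

float X[4] = {1, 2,
              3, 4};
float Y[4] = {4, 1,
              2, 3};            // must be non-singular
float Z[4];
matrixf_div(X,Y,Z,2);
matrixf_print(Z,2,2);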

matrixf_linsolve() (solve linear system of equations)

The matrixf_linsolve(*A,n,*b,*x,opts) method solves a set of \(n\) linear equations \(\vec{A}\vec{x} = \vec{b}\) where \(\vec{A}\) is an \(n \times n\) matrix, and \(\vec{x}\) and \(\vec{b}\) are \(n \times 1\) vectors. The opts argument is reserved for future development and can be ignored by setting it to NULL.
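
A minimal compilable sketch (the values here are arbitrary):

#include <liquid/liquid.h>

int main() {
    // solve A x = b for x
    float A[9] = {1.0f, 0.5f, 0.0f,
                  0.5f, 2.0f, 0.3f,
                  0.0f, 0.3f, 1.5f};   // 3 x 3, non-singular
    float b[3] = {1.0f, -1.0f, 2.0f};
    float x[3];

    matrixf_linsolve(A,3,b,x,NULL);
    matrixf_print(x,3,1);
    return 0;
}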

matrixf_cgsolve() (solve linear system of equations)

The matrixf_cgsolve(*A,n,*b,*x,opts) method solves \(\vec{A}\vec{x} = \vec{b}\) using the conjugate gradient method, where \(\vec{A}\) is an \(n \times n\) symmetric positive-definite matrix. The opts argument is reserved for future development and can be ignored by setting it to NULL. Listed below is a basic example:

A =
  2.9002075   0.1722705   1.3046706   1.8082311
  0.1722705   1.0730995   0.2497573   0.1470398
  1.3046706   0.2497573   0.8930279   1.1471686
  1.8082311   0.1470398   1.1471686   1.5155975
b =
 11.7622252
 -1.0541668
  5.7372437
  8.1291904
matrixf_cgsolve(A,4, b, x_hat, NULL)
x_hat =
  2.8664699
 -1.8786657
  1.1224079
  1.2764599

For a more complete example, see examples/cgsolve_example.c located under the main project directory.

matrixf_det() (determinant)

The matrixf_det(*X,m,n) method computes the determinant of an \(n \times n\) matrix \(\vec{X}\). In liquid, the determinant is computed by L/U decomposition of \(\vec{X}\) using Doolittle's method (see matrixf_ludecomp_doolittle()) and then computing the product of the diagonal elements of \(\vec{U}\), viz

$$ \det\left(\vec{X}\right) = \left|\vec{X}\right| = \prod_{k=0}^{n-1}{\vec{U}_{k,k}} $$

This is equivalent to performing L/U decomposition using Crout's method and then computing the product of the diagonal elements of \(\vec{L}\) .

matrixf_det(X,4,4) = 585.40289307
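
In code, the determinant is obtained directly; a minimal fragment re-using the \(4 \times 4\) matrix X from the first example:

float det = matrixf_det(X,4,4);
printf("det(X) = %12.8f\n", det);   // 585.40289307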

matrixf_ludecomp_crout() (LU Decomposition, Crout's Method)

Crout's method decomposes a non-singular \(n \times n\) matrix \(\vec{A}\) into the product of a lower triangular \(n \times n\) matrix \(\vec{L}\) and an upper triangular \(n \times n\) matrix \(\vec{U}\). In fact, \(\vec{U}\) is a unit upper triangular matrix (its values along the diagonal are 1). The matrixf_ludecomp_crout(*A,m,n,*L,*U,*P) method implements Crout's method.

$$ \vec{L}_{i,k} = \vec{A}_{i,k} - \sum_{t=0}^{k-1}{ \vec{L}_{i,t} \vec{U}_{t,k} } \qquad \forall\, k \in \{0,\ldots,n-1\},\ i \in \{k,\ldots,n-1\} $$

$$ \vec{U}_{k,j} = \left[ \vec{A}_{k,j} - \sum_{t=0}^{k-1}{ \vec{L}_{k,t} \vec{U}_{t,j} } \right] / \vec{L}_{k,k} \qquad \forall\, k \in \{0,\ldots,n-1\},\ j \in \{k+1,\ldots,n-1\} $$

matrixf_ludecomp_crout(X,4,4,L,U,P)
L =
  0.84381998  0.00000000  0.00000000  0.00000000
  3.99475002 12.16227055  0.00000000  0.00000000
  7.28072023 18.49547005 -8.51144791  0.00000000
  6.07741022 13.23228073 -6.81350422 -6.70173073
U =
  1.00000000 -2.82410932  1.69539714 -1.97440207
  0.00000000  1.00000000 -0.17093502  0.68514121
  0.00000000  0.00000000  1.00000000 -1.35225296
  0.00000000  0.00000000  0.00000000  1.00000000

matrixf_ludecomp_doolittle() (LU Decomposition, Doolittle's Method)

Doolittle's method is similar to Crout's except that it is the lower triangular matrix \(\vec{L}\) that is left with ones on its diagonal. The update algorithm is similar to Crout's but with a slight variation: the upper triangular matrix is computed first. The matrixf_ludecomp_doolittle(*A,m,n,*L,*U,*P) method implements Doolittle's method.

$$ \vec{U}_{k,j} = \vec{A}_{k,j} - \sum_{t=0}^{k-1}{ \vec{L}_{k,t} \vec{U}_{t,j} } \qquad \forall\, k \in \{0,\ldots,n-1\},\ j \in \{k,\ldots,n-1\} $$

$$ \vec{L}_{i,k} = \left[ \vec{A}_{i,k} - \sum_{t=0}^{k-1}{ \vec{L}_{i,t} \vec{U}_{t,k} } \right] / \vec{U}_{k,k} \qquad \forall\, k \in \{0,\ldots,n-1\},\ i \in \{k+1,\ldots,n-1\} $$

Here is a simple example:

matrixf_ludecomp_doolittle(X,4,4,L,U,P)
L =
  1.00000000  0.00000000  0.00000000  0.00000000
  4.73412609  1.00000000  0.00000000  0.00000000
  8.62828636  1.52072513  1.00000000  0.00000000
  7.20225906  1.08797777  0.80051047  1.00000000
U =
  0.84381998 -2.38303995  1.43060994 -1.66603994
  0.00000000 12.16227150 -2.07895803  8.33287334
  0.00000000  0.00000000 -8.51144791 11.50963116
  0.00000000  0.00000000  0.00000000 -6.70172977

matrixf_qrdecomp_gramschmidt() (QR Decomposition, Gram-Schmidt algorithm)

liquid implements Q/R decomposition with the matrixf_qrdecomp_gramschmidt(*A,m,n,*Q,*R) method, which factors a non-singular \(n \times n\) matrix \(\vec{A}\) into the product of an orthogonal matrix \(\vec{Q}\) and an upper triangular matrix \(\vec{R}\), each \(n \times n\). That is, \(\vec{A} = \vec{Q}\vec{R}\) where \(\vec{Q}^T\vec{Q} = \vec{I}_n\) and \(\vec{R}_{i,j} = 0\ \forall\, i > j\). Building on the previous example for our test \(4 \times 4\) matrix \(\vec{X}\), the Q/R factorization is

matrixf_qrdecomp_gramschmidt(X,4,4,Q,R)
Q =
  0.08172275 -0.57793844  0.57207584  0.57622749
  0.38688579  0.63226062  0.66619849 -0.08213031
  0.70512730  0.13563085 -0.47556636  0.50816941
  0.58858842 -0.49783322  0.05239720 -0.63480729
R =
 10.32539940 -3.62461853  3.12874746  6.70309162
  0.00000000  3.61081028  1.62036073  2.78449297
  0.00000000  0.00000000  3.69074893 -5.34197950
  0.00000000  0.00000000  0.00000000  4.25430155
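
As a quick sanity check, the factors can be multiplied back together; a minimal fragment, assuming Q and R hold the values above:

float QR[16];
matrixf_mul(Q,4,4, R,4,4, QR,4,4);
matrixf_print(QR,4,4);      // should reproduce X to within machine precision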

matrixf_chol() (Cholesky Decomposition)

The matrixf_chol(*A,n,*L) method computes the Cholesky decomposition of an \(n \times n\) symmetric (or Hermitian) positive-definite matrix as \(\vec{A} = \vec{L}\vec{L}^T\) (\(\vec{A} = \vec{L}\vec{L}^H\) in the complex case), where \(\vec{L}\) is \(n \times n\) and lower triangular. An \(n \times n\) matrix is positive definite if \(\Re\{\vec{v}^H \vec{A} \vec{v}\} \gt 0\) for all non-zero vectors \(\vec{v}\). Note that \(\vec{A}\) can be either complex or real. Shown below is an example of the Cholesky decomposition of a \(4 \times 4\) positive-definite real matrix.

A =
  1.0201000  -1.4341999   0.3232000  -1.0302000
 -1.4341999   2.2663999   0.5506001   1.2883999
  0.3232000   0.5506001   4.2325001  -1.4646000
 -1.0302000   1.2883999  -1.4646000   5.0101995
matrixf_chol(A,4,Lp)
  1.0100000   0.0000000   0.0000000   0.0000000
 -1.4200000   0.5000000   0.0000000   0.0000000
  0.3200000   2.0100000   0.3000003   0.0000000
 -1.0200000  -0.3199999  -1.6499993   1.0700010
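
As with the Q/R factorization, the result can be checked by reconstructing \(\vec{A}\); a minimal fragment using the transpose-multiplication interface described earlier, assuming Lp holds the factor above:

float LLT[16];
matrixf_mul_transpose(Lp,4,4,LLT);  // computes Lp * Lp^T
matrixf_print(LLT,4,4);             // should reproduce A to within machine precision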

matrixf_gjelim() (Gauss-Jordan Elimination)

The matrixf_gjelim(*X,m,n) method in liquid performs Gauss-Jordan elimination on an \(m \times n\) matrix \(\vec{X}\); that is, it converts the matrix into its row-reduced echelon form using elementary matrix operations (e.g. pivoting). This can be used to solve a linear system of \(n\) equations \(\vec{A}\vec{x} = \vec{b}\) for the unknown vector \(\vec{x}\):

$$ \begin{bmatrix} A_{0,0} & A_{0,1} & \cdots & A_{0,n-1} \\ A_{1,0} & A_{1,1} & \cdots & A_{1,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n-1,0} & A_{n-1,1} & \cdots & A_{n-1,n-1} \end{bmatrix} \begin{bmatrix} x_{0} \\ x_{1} \\ \vdots \\ x_{n-1} \end{bmatrix} = \begin{bmatrix} b_{0} \\ b_{1} \\ \vdots \\ b_{n-1} \end{bmatrix} $$

The solution for \(\vec{x}\) is given by inverting \(\vec{A}\) and multiplying by \(\vec{b}\) , viz

$$ \vec{x} = \vec{A}^{-1}\vec{b} $$

This is also equivalent to augmenting \(\vec{A}\) with \(\vec{b}\) and converting the result to its row-reduced echelon form. If \(\vec{A}\) is non-singular, the resulting \(n \times (n+1)\) matrix will hold \(\vec{x}\) in its last column. The row-reduced echelon form of a matrix is computed in liquid using the Gauss-Jordan elimination algorithm, and can be invoked as such:

Ab =
  0.84381998 -2.38303995  1.43060994 -1.66603994  0.91488999
  3.99475002  0.88066000  4.69372988  0.44563001  0.71789002
  7.28072023 -2.06608009  0.67074001  9.80657005  1.06552994
  6.07741022 -3.93098998  1.22826004 -0.42142001 -0.81707001
matrixf_gjelim(Ab,4,5)
  1.00000000 -0.00000000  0.00000000 -0.00000000 -0.51971692
 -0.00000000  1.00000000  0.00000000  0.00000000 -0.43340963
 -0.00000000 -0.00000000  1.00000000 -0.00000000  0.64247853
  0.00000000 -0.00000000 -0.00000000  0.99999994  0.35925382

Notice that the result contains \(\vec{I}_n\) in its first \(n\) rows and \(n\) columns (to within machine precision).

.. footnote
Row permutations (swapping) might have occurred during elimination.
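
As a compilable sketch of the augmented-system approach, using the values of Ab shown above:

#include <stdio.h>
#include <liquid/liquid.h>

int main() {
    // augmented 4 x 5 matrix [A|b]
    float Ab[20] = {
        0.84381998f, -2.38303995f,  1.43060994f, -1.66603994f,  0.91488999f,
        3.99475002f,  0.88066000f,  4.69372988f,  0.44563001f,  0.71789002f,
        7.28072023f, -2.06608009f,  0.67074001f,  9.80657005f,  1.06552994f,
        6.07741022f, -3.93098998f,  1.22826004f, -0.42142001f, -0.81707001f};

    // row-reduce; the solution x appears in the last column
    matrixf_gjelim(Ab,4,5);

    // extract x from the last column
    unsigned int i;
    for (i=0; i<4; i++)
        printf("x[%u] = %12.8f\n", i, matrix_access(Ab,4,5,i,4));
    return 0;
}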