Let $f(t,T)$ be the instantaneous forward rate. Suppose its dynamics are
The idea of Heath-Jarrow-Morton is that the no-arbitrage drifts of the forward rates are uniquely determined once their volatilities and correlations are assigned. In particular, under the risk-neutral measure we must have
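In the usual one-factor notation (the symbols $\alpha$, $\sigma$, $W$ are conventional choices here, not taken from the text above), the forward-rate dynamics and the no-arbitrage drift restriction can be sketched as:

```latex
% HJM dynamics and drift restriction, standard one-factor form
df(t,T) = \alpha(t,T)\,dt + \sigma(t,T)\,dW_t,
\qquad
\alpha(t,T) = \sigma(t,T)\int_t^T \sigma(t,u)\,du .
```

In the multi-factor case the product becomes an inner product of vector volatilities, but the structure is the same: the drift is a functional of the volatility alone.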
Example. With constant coefficients, consider the short-rate dynamics
The zero-coupon bond price and the forward-rate dynamics are, respectively,
1. Zero Bonds in HJM
We now show that in an HJM model (1) the SDE for the zero-coupon bond price is
One might carry out the following calculation to obtain the above equation.
Note that (1) implies
which is nothing but an identity
For simplicity we denote the integral by in the rest of this section. Then
Prop 1 In an HJM model, we have
For each iterated integral above, we exchange the order of integration (Fubini's theorem) and obtain the formula.
Let be the bank account, i.e.,
where $r(t) = f(t,t)$ is the instantaneous short rate at time $t$. Note that
Then, carrying out the same calculation as in the proposition above, we obtain
Cor 2 In an HJM model, we have
Rem 1 As a result, we can now write down explicitly the formula for the change of numeraire from the bank account to the zero bond with a fixed maturity $T$.
This is an exponential martingale with . It follows that
where $W^T$ is a Brownian motion under the $T$-forward measure.
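In standard notation (with $B_t$ the bank account, $P(t,T)$ the zero bond, and $\sigma(t,T)$ the forward-rate volatility, all conventional symbols assumed here), the change of measure can be sketched as:

```latex
% Radon-Nikodym derivative and the Girsanov shift to the T-forward measure
\left.\frac{dQ^T}{dQ}\right|_{\mathcal{F}_t}
  = \frac{P(t,T)}{B_t\,P(0,T)},
\qquad
dW_t^T = dW_t + \left(\int_t^T \sigma(t,u)\,du\right) dt .
```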
The first question that puzzled me when I learned linear algebra was: what is the mathematical idea behind putting numbers into a rectangular table? In fact, I have held the following view ever since: to study linear transformations, linear substitutions in linear/quadratic forms, etc., people use the "method" of matrices and matrix products.
Real symmetric matrices are special among all kinds of matrices. From the point of view of linear transformations, they represent self-adjoint operators on a real inner product space. However, one does not need an inner product to tell whether a matrix is symmetric, since symmetry just means the off-diagonal entries mirror each other across the diagonal.
real eigenvalues and orthogonal eigenvectors
real positive-definite symmetric matrices (Cholesky decomposition)
the inverse of a symmetric matrix is also symmetric
the product of two symmetric matrices is not necessarily symmetric
for any matrix $A$, $A^\top A$ is positive semidefinite
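The last fact on this list has a one-line proof, since the quadratic form reduces to a squared norm:

```latex
x^\top (A^\top A)\,x = (Ax)^\top (Ax) = \lVert Ax \rVert^2 \ge 0 .
```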
1. Reflection and QR Decomposition
A reflection about a hyperplane containing the origin is usually called a Householder reflection and can be written as $H = I - 2uu^\top$,
where $u$ is a unit vector orthogonal to the hyperplane.
2. Rotation and Jacobi Eigenvalue Algorithm
The Jacobi eigenvalue algorithm is an iterative method for the Schur decomposition of a real symmetric matrix, i.e., for computing the eigenvalues and eigenvectors of the matrix. The algorithm uses plane rotations of the form
The basic idea can be illustrated by a $2\times 2$ symmetric matrix: for a suitable rotation angle we can have
Note that in equation (1) we must have
One proof of the above inequality is to consider the Frobenius norm of the matrix. Since the two matrices are orthogonally similar, they have the same Frobenius norm, and inequality (2) follows.
It is also a good exercise to prove (2) using only high-school algebra. [Hint: first, we have ; second, note that if , then the same inequality holds for the sum of squares.]
uBLAS is a library distributed within the Boost Libraries that provides templated C++ classes for vectors and matrices. "The design and implementation unify mathematical notation via operator overloading and efficient code generation via expression templates." Although uBLAS provides the C++ infrastructure on which safe and efficient linear algebra algorithms could be built, few linear algebra algorithms are actually included in the library: uBLAS has only a triangular solver and an implementation of LU factorization. Because no documentation for the file lu.hpp can be found, I had to take a look at the source code.
solve(). A typical interface is
where and are lower triangular matrices.
lu_factorize(). For matrices with nonsingular diagonals, one can simply use
For general matrices, one can use
where the permutation argument is, by default, an unbounded vector, i.e., vector<std::size_t, unbounded_array<std::size_t> >.
lu_substitute(). Similarly, there are mainly two interfaces.
where the permutation has to be declared as a permutation_matrix<>.
2. Algorithm for lu_factorize
After calling lu_factorize(), the input matrix is modified in place. My question, however, was how the function could "return" two matrices, since factorization means writing one matrix as the product of two. In fact, the algorithm solves the following equation when the input is a 3-by-4 matrix.
Indeed, after executing lu_factorize(), the solutions to the question marks replace the corresponding entries of the input matrix.
3. Examples: Inverse and Determinant
First, combining lu_factorize and lu_substitute can produce the inverse of a matrix.
- input a matrix with nonsingular diagonals
- initialize a second matrix as the identity matrix
- lu_factorize(the input matrix)
- lu_substitute<const matrix<double>, matrix<double> >(the input matrix, the identity matrix)
- output the inverse, now stored in the second matrix
For general nonsingular matrices, we can use the combination of lu_factorize(, ) and lu_substitute(, , ). Indeed, the permutation vector records all the row swaps made during the factorization whenever a diagonal entry becomes zero.
There is an interesting application of the permutation vector. Note that the determinant of the upper triangular matrix in equation (1) equals the determinant of the original matrix up to a sign: each time we swap two rows, the determinant changes sign, and the number of indices i such that pm(i) ≠ i is equal to the number of swaps made during the factorization.
Thus, by performing lu_factorize(, ) and counting that number, we can find the determinant.
- input a matrix
- lu_factorize(the matrix, the permutation vector)
- compute the product of the diagonal entries of the factorized matrix, flipping the sign once for each index i with pm(i) ≠ i
- output the determinant
The SDEs of the Heston model are
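In standard notation (the symbols $\kappa$, $\theta$, $\xi$, $\rho$ are the conventional choices, assumed here rather than taken from the text), the Heston system can be sketched as:

```latex
% Heston model under the risk-neutral measure
dS_t = r\,S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^1,
\qquad
dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^2,
\qquad
d\langle W^1, W^2\rangle_t = \rho\,dt .
```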
The interesting thing about this two-dimensional system is that the characteristic function of , under the measure associated with the numeraire , can be obtained through direct calculation. (I learned this from [Zhu].)
Now we decompose into two parts , where . Then the task of evaluating the expectation above is reduced to evaluate
Let $\mathcal{F}_t$ be the $\sigma$-algebra associated with the Brownian motion up to time $t$, and denote by and the following two expressions, respectively
First, note that we have
Second, by integrating in (1) we obtain
Rearranging this equation yields
Then it follows that
According to the Feynman-Kac theorem, the expectation satisfies the following PDE