HJM Model

Let {f(t,T)} be the instantaneous forward rate and suppose its dynamics are

\displaystyle  df(t,T)=\mu(t,T)dt+\sigma(t,T)\cdot\mathbf{dW_t}

The idea of Heath-Jarrow-Morton is that the no-arbitrage drifts of the forward rates are uniquely specified once their volatilities and correlations are assigned. In particular, under the risk-neutral measure we must have

\displaystyle  \mu(t,T)=\sigma(t,T)\int_t^T\sigma(t,s)ds=\sum_{i=1}^N\sigma_i(t,T)\int_t^T\sigma_i(t,s)ds \ \ \ \ \ (1)

Example. For constant {\theta} and {\sigma}, consider the short rate dynamic

\displaystyle  dr_t=\theta dt+\sigma dW_t

The zero-coupon bond price and the forward-rate dynamics are, respectively,

\displaystyle  \begin{array}{rcl}  && P(t,T)=\exp\left(\frac{\sigma^2}{6}(T-t)^3-\frac{\theta}{2}(T-t)^2-(T-t)r_t\right)\\ &&f(t,T)=-\frac{\partial\ln P(t,T)}{\partial T}=-\frac{\sigma^2}{2}(T-t)^2+\theta(T-t)+r_t\\ &&df(t,T)=\sigma^2(T-t)dt+\sigma dW_t \end{array}
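
The no-arbitrage drift (1) is easy to check numerically. Below is a minimal sketch in C++ comparing the quadrature in (1) against a closed form; the exponential volatility {\sigma(t,T)=\sigma_0e^{-a(T-t)}} is an illustrative assumption (it gives {\mu(t,T)=\sigma_0^2e^{-a(T-t)}(1-e^{-a(T-t)})/a}), not part of the example above.

#include <cmath>
#include <cstdio>

// assumed test volatility: sigma(t,T) = s0 * exp(-a (T - t))
const double s0 = 0.01, a = 0.1;
double vol(double t, double T) { return s0 * std::exp(-a * (T - t)); }

int main() {
    const double t = 1.0, T = 5.0;
    const int n = 2000;
    double h = (T - t) / n, I = 0.0;
    for (int i = 0; i < n; ++i)   // trapezoid rule for \int_t^T sigma(t,s) ds
        I += 0.5 * h * (vol(t, t + i * h) + vol(t, t + (i + 1) * h));
    double mu = vol(t, T) * I;    // HJM drift from (1)
    double closed = s0 * s0 * std::exp(-a * (T - t)) * (1 - std::exp(-a * (T - t))) / a;
    std::printf("quadrature: %.12f  closed form: %.12f\n", mu, closed);
}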

1. Zero Bonds in HJM

We now show that in an HJM model (1) the SDE for the zero-coupon bond price is

\displaystyle  \frac{dP(t,T)}{P(t,T)}=f(t,t)dt-\left(\int_t^T\sigma(t,s)ds\right)\mathbf{dW_t}.

One can obtain the above equation by the following calculation.

\displaystyle  \begin{array}{rcl}  && \frac{dP(t,T)}{P(t,T)}=-dY(t,T)+\frac12\langle dY,dY\rangle,\mbox{ where }Y(t,T)=\int_t^Tf(t,u)du\\ && dY(t,T)=\left(\int_t^T\mu(t,s)ds\right)dt+\left(\int_t^T\sigma(t,s)ds\right)\mathbf{dW_t}-f(t,t)dt \end{array}

Note that (1) implies

\displaystyle  \int_t^T\mu(t,s)ds=\frac12\left(\int_t^T\sigma(t,s)ds\right)^2,

which is nothing but the identity (both sides, as functions of {T}, vanish at {T=t} and have the same derivative {g(T)\int_t^Tg(u)du} in {T})

\displaystyle  \int_t^T\left(g(s)\int_t^sg(u)du\right)ds=\frac12\left(\int_t^Tg(s)ds\right)^2.

For simplicity we denote the integral {\int_t^T\sigma(t,u)du} by {-\nu(t,T)} in the rest of this section. Then

\displaystyle  -\nu_2(t,T):=\frac{\partial}{\partial T}(-\nu)=\sigma(t,T)

Prop 1 In an HJM model, we have

\displaystyle  P(t,T)=\frac{P(0,T)}{P(0,t)}\exp\left(-\frac12\int_0^t\left(\nu^2(s,T)-\nu^2(s,t)\right)ds+\int_0^t\left(\nu(s,T)-\nu(s,t)\right)dW_s\right).

Proof.

\displaystyle  \begin{array}{rcl}  &P(t,T)&=\exp\left(-\int_t^T f(t,u)du\right)\\ && =\exp\left(-\int_t^T\left[f(0,u)+\int_0^t(-\nu_2(s,u))(-\nu(s,u))ds+\int_0^t(-\nu_2(s,u))dW_s\right]du\right)\\ && =\exp\left(-\int_t^Tf(0,u)du\right)\exp\left(-\int_0^t\int_t^T\nu_2(s,u)\nu(s,u)duds+\int_0^t\int_t^T\nu_2(s,u)dudW_s\right) \end{array}

Integrating each iterated integral once in {u}, namely {\int_t^T\nu_2(s,u)\nu(s,u)du=\frac12\left(\nu^2(s,T)-\nu^2(s,t)\right)} and {\int_t^T\nu_2(s,u)du=\nu(s,T)-\nu(s,t)}, and noting {\exp\left(-\int_t^Tf(0,u)du\right)=P(0,T)/P(0,t)}, we obtain the formula. \Box

Let {B(t)} be the bank account, i.e.,

\displaystyle  B(t)=\exp\left(\int_0^tr(s)ds\right),

where {r(t)=f(t,t)} is the instantaneous short rate at time {t}. Note that

\displaystyle  r(t)=f(0,t)+\int_0^t\nu_2(s,t)\nu(s,t)ds-\int_0^t\nu_2(s,t)dW_s.

Then the same calculation as in the proposition above yields
Cor 2 In an HJM model, we have

\displaystyle  \frac{1}{B(t)}=\exp\left(-\int_0^tr(s)ds\right)=P(0,t)\exp\left(-\frac12\int_0^t\nu^2(s,t)ds+\int_0^t\nu(s,t)dW_s\right).

Rem 1 As a result, we can now write down explicitly the change of numeraire from the bank account {B(t)} to the zero bond {P(t,T)} for a fixed maturity {T}.

\displaystyle  \xi_t=\frac{P(t,T)}{P(0,T)}\frac{B(0)}{B(t)}=\exp\left(-\frac12\int_0^t\nu^2(s,T)ds+\int_0^t\nu(s,T)dW_s\right).

This is an exponential martingale with {d\xi_t=\nu(t,T)\xi_tdW_t}. It follows that

\displaystyle  dW_t^T=dW_t-\nu(t,T)dt,

where {W_t^T} is a Brownian motion under {T}-forward measure.
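
As a sanity check of Cor 2, the following Monte Carlo sketch verifies {{\mathbb E}[1/B(t)]=P(0,t)} in the Ho-Lee special case {\sigma(t,T)\equiv\sigma} (so {\nu(t,T)=-\sigma(T-t)} and {r(t)=f(0,t)+\frac12\sigma^2t^2+\sigma W_t}); the flat initial curve {f(0,t)=f_0} and the parameter values are assumptions of the sketch.

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double sigma = 0.01, f0 = 0.03, t = 2.0;   // illustrative parameters
    const int steps = 200, paths = 100000;
    const double dt = t / steps;
    std::mt19937 gen(42);
    std::normal_distribution<double> dW(0.0, std::sqrt(dt));
    double sum = 0.0;
    for (int p = 0; p < paths; ++p) {
        double W = 0.0, intr = 0.0;
        for (int i = 0; i < steps; ++i) {
            double s = i * dt;
            intr += (f0 + 0.5 * sigma * sigma * s * s + sigma * W) * dt; // r(s) ds
            W += dW(gen);
        }
        sum += std::exp(-intr);   // a sample of 1/B(t)
    }
    std::printf("MC E[1/B(t)] = %.6f, P(0,t) = %.6f\n", sum / paths, std::exp(-f0 * t));
}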


Polynomial Interpolation

Given a set of {n} distinct points {x_i} and numbers {y_i}, one is looking for a polynomial {p} of degree at most {n-1} such that {p(x_i)=y_i}. It is easy to prove that such a polynomial {p} exists and is unique.

1. Lagrange and Newton Forms

The interpolation polynomial {p} can be written down immediately.

\displaystyle  p(x)=y_1\frac{(x-x_2)(x-x_3)\cdots(x-x_n)}{(x_1-x_2)(x_1-x_3)\cdots(x_1-x_n)}+\cdots +y_n\frac{(x-x_1)(x-x_2)\cdots(x-x_{n-1})}{(x_n-x_1)(x_n-x_2)\cdots(x_n-x_{n-1})}

Note that the {i-}th summand above takes the value {y_i} at point {x_i} and {0} at other points. For {i=1,\dots,n}, polynomials

\displaystyle  \prod_{l\neq i}\frac{x-x_l}{x_i-x_l}

are called Lagrange basis polynomials as they span the linear space of all polynomials of degree at most {n-1}. In the language of linear algebra, the interpolation polynomial {p} has the coordinates {(y_1,\dots,y_n)} with respect to the basis.
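
As a quick illustration, here is a direct {O(n^2)} evaluation of the Lagrange form in C++ (the sample data, which interpolate {1+x+x^2}, are just an assumption of the sketch).

#include <cstdio>
#include <vector>

// evaluate the Lagrange form of the interpolation polynomial at point t
double lagrange(const std::vector<double>& x, const std::vector<double>& y, double t) {
    double p = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double basis = 1.0;                    // i-th Lagrange basis polynomial at t
        for (std::size_t l = 0; l < x.size(); ++l)
            if (l != i) basis *= (t - x[l]) / (x[i] - x[l]);
        p += y[i] * basis;
    }
    return p;
}

int main() {
    std::vector<double> x = {0, 1, 2}, y = {1, 3, 7};   // values of 1 + x + x^2
    std::printf("p(1.5) = %g\n", lagrange(x, y, 1.5));  // expect 4.75
}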

1.1. Divided Difference and Newton Form

Consider another basis of polynomials of degree at most {n-1} (known as Newton basis polynomials)

\displaystyle  \begin{array}{rcl}  && f_0=1\\ && f_1=(x-x_1)\\ && f_2=(x-x_1)(x-x_2)\\ && f_3=(x-x_1)(x-x_2)(x-x_3)\\ && \qquad\cdots\cdots\\ && f_{n-1}=(x-x_1)(x-x_2)\cdots(x-x_{n-1}) \end{array}

There is an interesting way to express the interpolation polynomial {p} as a linear combination of {f_i}. The idea is to consider the divided difference, which is defined as follows.

Given a polynomial {p} and {n} distinct points {x_i}, let {y_i=p(x_i)}. Then one can calculate recursively the divided difference.

\displaystyle  \begin{array}{rcl} && [y_1,y_2]=\displaystyle\frac{y_2-y_1}{x_2-x_1}\\ && [y_1,y_2,y_3]=\displaystyle\frac{[y_2,y_3]-[y_1,y_2]}{x_3-x_1}\\ && [y_1,y_2,y_3,y_4]=\displaystyle\frac{[y_2,y_3,y_4]-[y_1,y_2,y_3]}{x_4-x_1} \end{array}

Prop 1 For {1\leq k\leq n}, let {f_k} be the {k-}th Newton basis polynomial. Then

\displaystyle  [f_k(x_1),f_k(x_2),\dots,f_k(x_{l})]=\left\{\begin{array}{lr}0,\, & l\neq k+1\\ 1,\, & l=k+1 \end{array}\right.

It then follows that

\displaystyle  p(x)=y_1+[y_1,y_2]f_1+[y_1,y_2,y_3]f_2+\cdots+[y_1,\dots,y_n]f_{n-1}
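
A small sketch of the Newton form: the divided differences are built in place from the {y} values, then {p} is evaluated by nesting {p(x)=c_0+(x-x_1)(c_1+(x-x_2)(\cdots))}.

#include <cstdio>
#include <vector>

// c starts as the y values; on return c[k] = [y_1, ..., y_{k+1}]
std::vector<double> divided_differences(const std::vector<double>& x, std::vector<double> c) {
    for (std::size_t k = 1; k < c.size(); ++k)
        for (std::size_t i = c.size() - 1; i >= k; --i)
            c[i] = (c[i] - c[i - 1]) / (x[i] - x[i - k]);
    return c;
}

double newton_eval(const std::vector<double>& x, const std::vector<double>& c, double t) {
    double p = c.back();
    for (std::size_t i = c.size() - 1; i-- > 0;)
        p = p * (t - x[i]) + c[i];     // nested evaluation of the Newton form
    return p;
}

int main() {
    std::vector<double> x = {0, 1, 2}, y = {1, 3, 7};
    std::vector<double> c = divided_differences(x, y);
    std::printf("p(1.5) = %g\n", newton_eval(x, c, 1.5));  // 4.75, matching Lagrange
}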

2. Hermite

For {x\in[x_i,x_{i+1}]} and {\delta=x-x_i}, the cubic polynomial matching the values {f_i,f_{i+1}} and the derivatives {\dot{f_i},\dot{f}_{i+1}} at the two endpoints has the form

\displaystyle  p(x)=c_0+c_1\delta+c_2\delta^2+c_3\delta^3,

where, writing {\Delta x_{i+1/2}=x_{i+1}-x_i} for the grid spacing and {S_{i+1/2}=(f_{i+1}-f_i)/\Delta x_{i+1/2}} for the secant slope,

\displaystyle  \begin{array}{rcl}  && c_0=f_i,\qquad c_1=\dot{f_i},\\ && c_2=(3S_{i+1/2}-\dot{f}_{i+1}-2\dot{f_i})/\Delta x_{i+1/2}\\ && c_3=(-2S_{i+1/2}+\dot{f}_{i+1}+\dot{f_i})/(\Delta x_{i+1/2})^2 \end{array}
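
The coefficients translate directly into code; a small sketch (the test data interpolate {x^3} on {[1,2]}, an assumption used only to check the output):

#include <cstdio>

struct Cubic { double c0, c1, c2, c3; };

// cubic Hermite coefficients on [x_i, x_{i+1}] with h = Delta x_{i+1/2}
Cubic hermite(double fi, double fi1, double dfi, double dfi1, double h) {
    double S = (fi1 - fi) / h;                   // secant slope S_{i+1/2}
    return { fi, dfi,
             (3 * S - dfi1 - 2 * dfi) / h,
             (-2 * S + dfi1 + dfi) / (h * h) };
}

int main() {
    Cubic p = hermite(1.0, 8.0, 3.0, 12.0, 1.0); // f(x) = x^3 on [1,2], f' = 3x^2
    double d = 0.5;                              // x = 1.5, delta = 0.5
    double v = p.c0 + d * (p.c1 + d * (p.c2 + d * p.c3));
    std::printf("p(1.5) = %g (exact 3.375)\n", v);
}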


Symmetric Matrices in Numerical Linear Algebra

The first question that puzzled me when I learned linear algebra was: what is the mathematical idea behind putting numbers into a rectangular table? I have held the following view ever since: to study linear transformations, linear substitutions in linear/quadratic forms, etc., one uses the "method" of matrices and matrix products.

\displaystyle  \begin{array}{rcl}  \mbox{Linear transformation} &\Longleftrightarrow & \left\{\begin{array}{l}\mbox{matrix}\\ \mbox{matrix product (fundamental transformation)}\\ M\sim T^{-1}MT \end{array}\right.\\ \left\{\begin{array}{l} \mbox{invariant space} \\ \mbox{eigenvalue, eigenvector} \end{array}\right. & \Longrightarrow & \mbox{Rational/Jordan canonical forms} \end{array}

Real symmetric matrices are special among all kinds of matrices. From the point of view of linear transformations, they represent self-adjoint operators on a real inner product space. However, one does not need an inner product to tell whether a matrix is symmetric: a matrix is symmetric exactly when its entries are mirrored across the diagonal, i.e., {M^{T}=M}.

  • real eigenvalues and orthogonal eigenvectors
  • real symmetric positive-definite matrices admit a Cholesky decomposition
  • the inverse of a nonsingular symmetric matrix is also symmetric
  • the product of two symmetric matrices is not necessarily symmetric
  • for any matrix {M}, {M^{T}M} is positive-semidefinite.

    1. Reflection and QR Decomposition

    A reflection about a hyperplane containing the origin is usually called a Householder reflection and can be written as

    \displaystyle  I-2v\cdot v^{T},

    where {v} is a unit vector orthogonal to the hyperplane. The matrix {I-2v\cdot v^{T}} is both

  • orthogonal
  • symmetric
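
    Applied to a vector {x}, the reflection with {v\propto x-\|x\|e_1} sends {x} to {(\|x\|,0,\dots,0)}; this is the basic step of QR decomposition. A minimal sketch (using an unnormalized {v}, hence the division by {v^{T}v}; a production version would also guard against cancellation in {x_1-\|x\|}):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> x = {3.0, 4.0, 0.0};
        double norm = std::sqrt(x[0]*x[0] + x[1]*x[1] + x[2]*x[2]);
        std::vector<double> v = {x[0] - norm, x[1], x[2]};   // v ~ x - ||x|| e_1
        double vv = 0.0, vx = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) { vv += v[i]*v[i]; vx += v[i]*x[i]; }
        // Hx = x - 2 (v^T x / v^T v) v, applied without forming the matrix
        for (std::size_t i = 0; i < x.size(); ++i) x[i] -= 2 * vx / vv * v[i];
        std::printf("Hx = (%g, %g, %g)\n", x[0], x[1], x[2]); // (5, 0, 0)
    }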

    2. Rotation and Jacobi Eigenvalue Algorithm

    The Jacobi eigenvalue algorithm is an iterative method for the Schur decomposition of a real symmetric matrix, i.e., for its eigenvalues and eigenvectors (for symmetric matrices this coincides with the spectral decomposition). The algorithm uses rotations of the form

    \displaystyle  \begin{pmatrix} \begin{pmatrix} c & -s\\ s & c \end{pmatrix} & 0\\ 0 & I_{n-2}\\ \end{pmatrix}

    The basic idea can be illustrated by a {2\times2} matrix {\left(\begin{smallmatrix}a_1 & b \\ b & a_2\end{smallmatrix}\right)}. Let {c=\cos\theta} and {s=\sin\theta}. Then for a suitable {\theta} we have

    \displaystyle   \begin{pmatrix} c & s\\ -s & c \end{pmatrix} \begin{pmatrix} a_1 & b \\ b & a_2 \end{pmatrix} \begin{pmatrix} c & -s\\ s & c \end{pmatrix}= \begin{pmatrix} a_1^{\prime} & 0\\ 0 & a_2^{\prime} \end{pmatrix}, \ \ \ \ \ (1)

    where

    \displaystyle  (c^2-s^2)b-sc(a_1-a_2)=0.

    Note that in equation (1) we must have

    \displaystyle   a_1^2+a_2^2\leq {a_1^{\prime}}^2+{a_2^{\prime}}^2. \ \ \ \ \ (2)

    One proof of the above inequality considers the Frobenius norm. As {\left(\begin{smallmatrix}a_1 & b \\ b & a_2\end{smallmatrix}\right)} is orthogonally similar to {\left(\begin{smallmatrix}a_1^{\prime} & 0 \\ 0 & a_2^{\prime}\end{smallmatrix}\right)}, the two matrices have the same Frobenius norm. The inequality (2) then follows.

    It is also a good exercise problem to prove (2) in the context of high school algebra. [hint: first, we have {a_1+a_2= a_1^{\prime}+a_2^{\prime}}; second, note that if {|a_1-a_2|\leq |a_1^{\prime}-a_2^{\prime}|}, then the same inequality holds for the sum of squares.]
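
    A sketch of one Jacobi rotation on a symmetric {2\times2} block: {\theta} is chosen from {\tan2\theta=2b/(a_1-a_2)} so that the rotated off-diagonal entry vanishes, and the Frobenius norm check illustrates (2); the entries are illustrative.

    #include <cmath>
    #include <cstdio>

    int main() {
        double a1 = 2.0, a2 = 1.0, b = 0.5;              // illustrative entries
        double theta = 0.5 * std::atan2(2 * b, a1 - a2); // zeroes the off-diagonal
        double c = std::cos(theta), s = std::sin(theta);
        double a1p = c * c * a1 + 2 * s * c * b + s * s * a2;
        double a2p = s * s * a1 - 2 * s * c * b + c * c * a2;
        double off = (c * c - s * s) * b - s * c * (a1 - a2);
        std::printf("a1' = %g, a2' = %g, off-diagonal = %g\n", a1p, a2p, off);
        // Frobenius norm preserved: a1^2 + 2b^2 + a2^2 = a1'^2 + a2'^2
        std::printf("%g vs %g\n", a1*a1 + 2*b*b + a2*a2, a1p*a1p + a2p*a2p);
    }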


    LU factorisation in uBLAS

    uBLAS is a library distributed with the Boost Libraries that provides templated C++ classes for vectors and matrices. "The design and implementation unify mathematical notation via operator overloading and efficient code generation via expression templates." Although uBLAS provides the C++ infrastructure on which safe and efficient linear algebra algorithms could be built, few linear algebra algorithms are included in the library: uBLAS has only a triangular solver and an implementation of LU factorisation. Because there is no documentation for the file lu.hpp, I had to read the source code.

    1. Interfaces

  • solve(). A typical interface is

    solve (m1, m2, lower_tag ()),

    where {m1} and {m2} are lower triangular matrices.

  • lu_factorize(). For matrices with nonsingular diagonals, one can simply use

    lu_factorize (m)

    For general matrices, one can use

    lu_factorize (m, v),

    where {v} is, by default, an unbounded vector {(0,1,2,\cdots)}, i.e., vector <std::size_t, unbounded_array<std::size_t> >.

  • lu_substitute(). Similarly, there are mainly two interfaces,

    void lu_substitute (A, Z)
    void lu_substitute (A, pm, Z)

    where {pm} has to be declared as a permutation_matrix < >.

    2. Algorithm for lu_factorize

    After executing lu_factorize({m}), the matrix {m} changes in place. At first this raised a question for me: shouldn't the function "return" two matrices, since it factorizes one matrix into the product of two? In fact both factors are packed into {m} itself. When {m} is a 3-by-4 matrix, the algorithm solves the following equation.

    \displaystyle   \begin{pmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0\\ \frac{a_2}{a_1} & 1 & 0\\ \frac{a_3}{a_1} & ? & 1 \end{pmatrix} \cdot \begin{pmatrix} a_1 & b_1 & c_1 & d_1\\ 0 & ? & ? & ?\\ 0 & 0 & ? & ? \end{pmatrix} \ \ \ \ \ (1)


    Indeed, after executing lu_factorize({m}), the solutions to the question marks replace the entries {\left(\begin{smallmatrix}b_2 & c_2 & d_2 \\ b_3 & c_3 & d_3\end{smallmatrix}\right)} in {m} correspondingly (the unit diagonal of the lower factor and the zeros are left implicit).

    3. Examples: Inverse and Determinant

    First, combining lu_factorize and lu_substitute can produce the inverse of matrix.

    1. input a matrix {m} with nonsingular diagonals
    2. {n} = identity matrix
    3. lu_factorize ({m})
    4. lu_substitute<const matrix<double>, matrix<double> > ({m}, {n})
    5. output the inverse {n}

    For general nonsingular matrices, we can use the combination of lu_factorize({m}, {pm}) and lu_substitute({m}, {pm}, {n}), as in the sketch below. The vector {pm} records the row swaps (pivoting) made during the factorization.
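
    Here is the recipe as a complete function, a sketch assuming the Boost headers are available (lu_factorize returns a nonzero value when it hits a singular pivot):

    #include <boost/numeric/ublas/lu.hpp>
    #include <boost/numeric/ublas/matrix.hpp>

    namespace ublas = boost::numeric::ublas;

    // invert a (nonsingular) square matrix via LU with partial pivoting
    bool invert(const ublas::matrix<double>& input, ublas::matrix<double>& inverse) {
        ublas::matrix<double> m(input);                        // working copy
        ublas::permutation_matrix<std::size_t> pm(m.size1());  // records row swaps
        if (ublas::lu_factorize(m, pm) != 0) return false;     // singular
        inverse = ublas::identity_matrix<double>(m.size1());
        ublas::lu_substitute(m, pm, inverse);                  // solve m * inverse = I
        return true;
    }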

    There is an interesting application of the vector {pm}. Note that the determinant of the upper triangular matrix in equation (1) equals the determinant of the original matrix up to a sign: each time two rows are swapped, the sign changes; and the number of indices {i} with {pm[i]\neq i} equals the number of swaps made during the factorization.

    Thus, performing lu_factorize({m}, {pm}) and counting the swaps, we obtain the determinant; a sketch follows the steps below.

    1. input a matrix {m}
    2. lu_factorize ({m}, {pm})
    3. {\displaystyle \text{det}=\prod (-1)^{k_i} m(i,i)}, where {\displaystyle k_i=\left\{\begin{matrix}0 & pm[i]=i \\ 1 & pm[i]\neq i\end{matrix}\right.}
    4. output the determinant \text{det} of {m}
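
    A sketch of the determinant recipe under the same assumptions:

    #include <boost/numeric/ublas/lu.hpp>
    #include <boost/numeric/ublas/matrix.hpp>

    namespace ublas = boost::numeric::ublas;

    double determinant(ublas::matrix<double> m) {              // by-value working copy
        ublas::permutation_matrix<std::size_t> pm(m.size1());
        if (ublas::lu_factorize(m, pm) != 0) return 0.0;       // singular
        double det = 1.0;
        for (std::size_t i = 0; i < m.size1(); ++i) {
            det *= m(i, i);                                    // diagonal of U
            if (pm(i) != i) det = -det;                        // one recorded swap
        }
        return det;
    }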

    Quanto Adjustments

    Let {X_t} be the exchange rate process; that is, one unit of the foreign currency is worth {X_t} units of domestic currency at time {t}. Let {Y^d_S} be a domestic floating interest rate which is fixed in the market at time {S}. Consider a quanto product in which {Y_S^d} is paid at time {T} in foreign currency units. Then we need to evaluate the following expectation under the foreign {T}-forward measure

    \displaystyle  P^f(0,T){\mathbb E}^f[Y^d_S]. \ \ \ \ \ (1)

    1. One Approach

    Suppose the SDE for the process {Y_t^d} under the domestic {T}-forward measure is

    \displaystyle  dY_t^d=\sigma_Y Y_t^d dW_t^Y.

    We derive the SDE for {Y^d_t} under foreign {T}-forward measure as follows.

    Recall that the Radon-Nikodym derivative {\frac{dQ^T_f}{dQ^T}} defining the change of measure between the foreign {T}-forward measure and the domestic {T}-forward measure is given by

    \displaystyle  \frac{dQ^T_f}{dQ^T}= \frac{X_TP^d(0,T)}{X_0P^f(0,T)}.

    Then we have a martingale under the domestic {T}-forward measure

    \displaystyle  \xi_t:=\frac{X_t P^f(t,T)}{X_0 P^f(0,T)}\frac{P^d(0,T)}{P^d(t,T)}.

    Assume that {\xi_t} has a lognormal volatility, i.e., {\xi_t} satisfies the equation

    \displaystyle  d\xi_t= \sigma_{\textrm{fx}}\xi_t dW_t^{\textrm{fx}}.

    Applying Girsanov’s theorem, we get the SDE for {Y^d_t} under foreign {T}-forward measure

    \displaystyle  dY_t^d=\frac{1}{\xi_t} d\xi_t\cdot dY_t^d+\sigma_Y Y_t^d d\widetilde{W}_t^Y=\sigma_{\textrm{fx}}\sigma_Y\rho Y_t^ddt+\sigma_Y Y_t^d d\widetilde{W}_t^Y,

    where {\rho} is the correlation between {dW_t^Y} and {dW_t^{\textrm{fx}}}. Solving the equation, we obtain

    \displaystyle  {\mathbb E}^f[Y^d_S]=Y_0^d\exp(\rho\sigma_{\textrm{fx}}\sigma_Y S).
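
    A Monte Carlo sketch of this adjustment: we simulate the exact solution of the drift-adjusted SDE and compare the sample mean with the closed form (all parameter values are illustrative assumptions).

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        const double Y0 = 0.03, sY = 0.2, sFX = 0.1, rho = -0.5, S = 2.0;
        std::mt19937 gen(7);
        std::normal_distribution<double> N(0.0, 1.0);
        const int paths = 500000;
        double sum = 0.0;
        for (int p = 0; p < paths; ++p) {
            double w = std::sqrt(S) * N(gen);   // a sample of W~_S
            // exact solution of dY = rho sFX sY Y dt + sY Y dW~
            sum += Y0 * std::exp(rho * sFX * sY * S - 0.5 * sY * sY * S + sY * w);
        }
        std::printf("MC: %.6f  closed form: %.6f\n",
                    sum / paths, Y0 * std::exp(rho * sFX * sY * S));
    }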

    2. Another Approach

    In the paper [Quanto-Adjustments], the expectation (1) is evaluated through expectations under the domestic measure associated with the numeraire {N_t}

    \displaystyle  P^f(0,T){\mathbb E}^f[Y^d_S]=\frac{N_0}{X_0}\cdot{\mathbb E}\left[Y^d_S\frac{X_S P^f(S,T)}{N_S}\right]= \frac{N_0}{X_0}\cdot{\mathbb E}\left[Y^d_S\frac{X_S P^f(S,T)}{P^d(S,T)}\frac{P^d(S,T)}{N_S}\right]. \ \ \ \ \ (2)

    We proceed under the following assumptions on the variables in equation (2).

    • The distributions of {Y_S^d} and {\displaystyle\frac{X_S P^f(S,T)}{P^d(S,T)}} are both lognormal under the selected domestic measure.

      \displaystyle  Y_S^d =Y_0^d\exp\left(-\frac12\sigma_Y^2 S+ \sigma_Y W_S\right),\; Y_0^d={\mathbb E}[Y_S^d]

      \displaystyle  \frac{X_S P^f(S,T)}{P^d(S,T)} =X^{*}_0\exp\left(-\frac12\sigma_{\textrm{fx}}^2 S + \sigma_{\textrm{fx}}W_S^{\textrm{fx}}\right),\; X^{*}_0={\mathbb E}\left[\frac{X_S P^f(S,T)}{P^d(S,T)}\right].

    • The last term on the right-hand side of equation (2) follows a linear model

      \displaystyle  \frac{P^d(S,T)}{N_S}=\alpha + \beta Y_S^d.

    It follows that

    \displaystyle  \begin{array}{rl} &\quad\frac{X_0 P^f(0,T)}{N^d_0}={\mathbb E}\left[\frac{X_S P^f(S,T)}{N^d_S}\right]={\mathbb E}\left[\frac{X_S P^f(S,T)}{P^d(S,T)}\frac{P^d(S,T)}{N_S}\right]\\ \\ &={\mathbb E}\left[X^{*}_0\exp\left(-\frac12\sigma_{\textrm{fx}}^2 S + \sigma_{\textrm{fx}}W_S^{\textrm{fx}}\right)\left(\alpha + \beta Y_0^d\exp\left(-\frac12\sigma_Y^2 S+ \sigma_Y W_S\right) \right)\right]\\ \\ &=X^{*}_0\left(\alpha+\beta Y_0^d\exp(\rho\sigma_{\textrm{fx}}\sigma_Y S)\right), \end{array}

    where {\rho} is the correlation between the driving Brownian motions {W^{\textrm{fx}}} and {W}. We then have

    \displaystyle  X^{*}_0=\frac{X_0 P^f(0,T)}{N^d_0\left(\alpha+\beta Y_0^d\exp(\rho\sigma_{\textrm{fx}}\sigma_Y S)\right)}.

    Now we can evaluate the expectations in equation (2) and obtain

    \displaystyle  \begin{array}{rl} &\quad {\mathbb E}^f[Y^d_S]=\frac{N_0}{P^f(0,T) X_0} \cdot{\mathbb E}\left[Y^d_S\frac{X_S P^f(S,T)}{P^d(S,T)}\frac{P^d(S,T)}{N_S}\right]\\ \\ &=\frac{N_0}{P^f(0,T) X_0}Y_0^d\frac{X_0 P^f(0,T)}{N^d_0\left(\alpha+\beta Y_0^d\exp(\rho\sigma_{\textrm{fx}}\sigma_Y S)\right)}{\mathbb E}\left[\exp\left(-\frac12\sigma_Y^2 S+ \sigma_Y W_S\right)\right.\\ \\ &\qquad\left.\exp\left(-\frac12\sigma_{\textrm{fx}}^2 S + \sigma_{\textrm{fx}}W_S^{\textrm{fx}}\right)\left(\alpha + \beta Y_0^d\exp\left(-\frac12\sigma_Y^2 S+ \sigma_Y W_S\right)\right)\right]\\ \\ &=Y_0^d\frac{\displaystyle  \exp(\rho\sigma_{\textrm{fx}}\sigma_Y S)\left(\alpha+ \beta Y_0^d\exp(\rho\sigma_{\textrm{fx}}\sigma_Y S+\sigma_Y^2 S)\right)}{\displaystyle  \left(\alpha+\beta Y_0^d\exp(\rho\sigma_{\textrm{fx}}\sigma_Y S)\right)}. \end{array}
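
    The final formula is easy to package as a function; {\alpha} and {\beta} come from fitting the linear model for {P^d(S,T)/N_S} and are treated as inputs here. Note that for {\beta=0} it reduces to the adjustment of the first approach.

    #include <cmath>
    #include <cstdio>

    double quanto_expectation(double Y0, double alpha, double beta,
                              double sFX, double sY, double rho, double S) {
        double q = std::exp(rho * sFX * sY * S);   // quanto factor
        return Y0 * q * (alpha + beta * Y0 * q * std::exp(sY * sY * S))
                      / (alpha + beta * Y0 * q);
    }

    int main() {   // illustrative parameters only
        std::printf("%.6f\n", quanto_expectation(0.03, 1.0, 0.1, 0.1, 0.2, -0.5, 2.0));
    }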


    Characteristic Functions in Heston Model

    The SDEs of Heston model are

    \displaystyle  \left\{\begin{array}{l} dS(t)=r(t)S(t)dt+\sqrt{V(t)}S(t)dW(t)\\ \quad\left(or, dx(t)=\left(r(t)-\frac{V(t)}{2}\right)dt + \sqrt{V(t)}dW(t),\;\mbox{where }x(t)=\ln S(t)\right)\\ \\ dV(t)=\kappa(\theta-V(t))dt + \sigma\sqrt{V(t)}dZ(t)\\ dW(t)dZ(t)=\rho dt \end{array}\right. \ \ \ \ \ (1)

    The interesting thing about this two-dimensional system is that the characteristic function {f_1(\phi)} of {x_T}, under the measure associated with the numeraire {S_T}, can be obtained through direct calculation. (I learned this from [Zhu].)

    \displaystyle  \begin{array}{ll} f_1(\phi) ={\mathbb E}\left[\frac{e^{x_T}}{e^{x_0}e^{rT}}\exp(i\phi x_T)\right]={\mathbb E}[\exp(-rT-x_0+(1+i\phi)x_T)] \\ \\ =\exp(i\phi(x_0+rT))\,{\mathbb E}\left[\exp\left((1+i\phi)\left(-\frac12\int_0^TV(t)dt+\int_0^T\sqrt{V(t)}dW(t)\right)\right)\right] \\ \end{array}

    Now we decompose {W} into two parts { W=\rho Z+\sqrt{1-\rho^2}W^{\prime}}, where {dZdW^{\prime}=0}. Then the task of evaluating the expectation above reduces to evaluating

    \displaystyle  \textstyle{\mathbb E}\left[\exp\left((1+i\phi)\left(-\frac12\int_0^T V(t)dt+\rho\int_0^T\sqrt{V(t)}dZ(t)+\sqrt{1-\rho^2}\int_0^T\sqrt{V(t)}dW^{\prime}(t)\right)\right)\right]

    Let {{\mathcal{F}}} be the {\sigma}-algebra generated by the Brownian motion {Z(t)} up to time {T}, and denote by {A} and {B} the following two expressions respectively

    \displaystyle  \begin{array}{l} \exp\left((1+i\phi)\left(-\frac12\int_0^TV(t)dt+\rho\int_0^T\sqrt{V(t)}dZ(t)\right)\right),\\ \\ \exp\left((1+i\phi)\sqrt{1-\rho^2}\int_0^T\sqrt{V(t)}dW^{\prime}(t)\right) \end{array}

    First, note that conditionally on {{\mathcal{F}}} the integral {\int_0^T\sqrt{V(t)}dW^{\prime}(t)} is Gaussian with variance {\int_0^TV(t)dt}, so we have

    \displaystyle  {\mathbb E}(B|{\mathcal{F}}) =\exp\left(\frac12(1+i\phi)^2(1-\rho^2)\int_0^T V(t)dt\right)

    Second, by integrating {V(t)} in (1) we obtain

    \displaystyle  V_T-V_0=\kappa\theta T-\kappa\int_0^T V(t)dt+\sigma\int_0^T\sqrt{V(t)}dZ(t).

    Rearranging this equation yields

    \displaystyle  \int_0^T\sqrt{V(t)}dZ(t)=\frac{1}{\sigma}\left(V_T-V_0-\kappa\theta T+\kappa\int_0^TV(t)dt\right).

    Then it follows that

    \displaystyle  \begin{array}{rl} f_1(\phi)=&\exp(i\phi(x_0+rT)){\mathbb E}(AB)=\exp(i\phi(x_0+rT)){\mathbb E}(A\cdot{\mathbb E}(B|{\mathcal{F}}))\\ =&\exp(i\phi(x_0+rT)){\mathbb E}\left[\exp\left(-(1+i\phi)\frac12\int_0^TV(t)dt +\right.\right.\\ \\ &\left.\left.+(1+i\phi)\frac{\rho}{\sigma} \left(V_T-V_0-\kappa\theta T+\kappa\int_0^TV(t)dt\right)+\frac12(1+i\phi)^2(1-\rho^2)\int_0^T V(t)dt\right)\right]\\ \\ =&\exp[(i\phi(x_0+rT))-(s_{21}(V_0+\kappa\theta T))]{\mathbb E}\left[\exp\left(-s_{11}\int_0^TV(t)dt+s_{21}V_T\right)\right] \end{array}

    where

    \displaystyle  \begin{array}{l} s_{11}=-(1+i\phi)\left(-\displaystyle\frac12+\frac{\rho\kappa}{\sigma}+\frac12(1+i\phi)(1-\rho^2)\right),\\ s_{21}=(1+i\phi)\displaystyle \frac{\rho}{\sigma} \end{array}

    Finally, let

    \displaystyle  y(V_0,T)={\mathbb E}\left[\exp\left(-s_{11}\int_0^TV(t)dt+s_{21}V_T\right)\right]

    According to the Feynman-Kac theorem, the expectation {y(V_0,T)} satisfies the following PDE, with initial condition {y(V,0)=e^{s_{21}V}}.

    \displaystyle  \frac{\partial y}{\partial T}=-s_{11}Vy+\kappa(\theta-V)\frac{\partial y}{\partial V}+\frac12\sigma^2V\frac{\partial^2y}{\partial V^2}.
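
    The coefficients {s_{11}} and {s_{21}} are readily computed with std::complex; a small sketch (the parameter values are illustrative):

    #include <complex>
    #include <cstdio>

    using cd = std::complex<double>;

    void coefficients(double phi, double rho, double sigma, double kappa,
                      cd& s11, cd& s21) {
        const cd u(1.0, phi);                     // u = 1 + i phi
        s11 = -u * (-0.5 + rho * kappa / sigma + 0.5 * u * (1 - rho * rho));
        s21 = u * rho / sigma;
    }

    int main() {
        cd s11, s21;
        coefficients(1.0, -0.7, 0.3, 2.0, s11, s21);
        std::printf("s11 = %g%+gi, s21 = %g%+gi\n",
                    s11.real(), s11.imag(), s21.real(), s21.imag());
    }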


    Hull-White 1F

    The SDE of the Hull-White one-factor model is

    \displaystyle  dr_t=(\theta(t)-ar_t)dt+\sigma(t) dW_t.

    The equation can be decomposed as {r_t =x_t+\alpha_t}, where

    \displaystyle  \left\{ \begin{array}{l} dx_t = -ax_tdt+\sigma(t) dW_t,\; x_0=0\\ d\alpha_t = (\theta(t)-a\alpha_t) dt,\; \alpha_0 =r_0\\ \end{array} \right.

    Although the second equation above is an ODE, we do not need to solve it in practice. For the first one, we have

    \displaystyle  x(t)=x(s)e^{-a(t-s)}+\int_s^t e^{-a(t-u)}\sigma(u) dW_u \ \ \ \ \ (1)

    1. Zero Bonds in Hull-White 1F

    We now evaluate the following expectation directly.

    \displaystyle  P(t,T)={\mathbb E}\left[\left.e^{-\int_t^T r_u du}\right|{\mathcal{F}}_t\right]=e^{-\int_t^T \alpha_u du}{\mathbb E}\left[\left.e^{-\int_t^T x_u du}\right|{\mathcal{F}}_t\right] \ \ \ \ \ (2)

    First, from (1) we get

    \displaystyle  \int_t^T x(u)du =x_t\int_t^T e^{-a(u-t)}du +\int_t^T\int_t^u\sigma(s)e^{-a(u-s)} dW_sdu.

    Changing the order of integration on the last term yields

    \displaystyle  \begin{array}{l} \int_t^T\int_t^u\sigma(s)e^{-a(u-s)}dW_sdu=\int_t^T\sigma(s)e^{as}\left(\int_s^Te^{-au}du\right)dW_s\\ \\ \qquad =\int_t^T\sigma(s)e^{as}\frac1a\left(e^{-as}-e^{-aT}\right)dW_s =\int_t^T\sigma(s)\frac{1-e^{-a(T-s)}}{a}dW_s \end{array}

    Thus

    \displaystyle  \int_t^T x(u)du=\frac{1-e^{-a(T-t)}}{a}x(t)+\int_t^T \frac{1-e^{-a(T-u)}}{a}\sigma(u) dW_u.

    (If {a=0}, we simply have {\int_t^T x(u)du=(T-t)x_t+\int_t^T(T-u)\sigma(u) dW_u}.)

    Let {\displaystyle B(t,T)=\frac{1-e^{-a(T-t)}}{a}}. Then the zero-bond price (2) becomes

    \displaystyle  P(t,T)=\exp\left(-\int_t^T\alpha(u)du-B(t,T)x(t) + \frac12\int_t^T B(u,T)^2\sigma(u)^2 du\right) \ \ \ \ \ (3)

    It implies that

    \displaystyle  \frac{P(0,T)}{P(0,t)}=\exp\left(-\int_t^T\alpha(u)du + \frac12\int_0^T B(u,T)^2\sigma(u)^2 du-\frac12\int_0^t B(u,t)^2\sigma(u)^2du\right)

    We then replace the integral {\displaystyle\int_t^T\alpha(u)du} in (3) and obtain
    Thm 1 The price of a zero bond in Hull-White 1F is

    \displaystyle  P(t,T) = \frac{P(0,T)}{P(0,t)}\exp\left(-B(t,T)x(t) + \frac12\int_0^t \left(B(u,t)^2- B(u,T)^2\right)\sigma(u)^2 du\right)

    Rem 1 (ABr Form) Suppose {\sigma} is constant and let {f(t,T)} be the instantaneous forward rate, i.e., {\displaystyle f(t,T)=-\frac{\partial\ln P(t,T)}{\partial T}}. It follows from (3) that

    \displaystyle  f(0,t)= \alpha(t)-\frac12 B(0,t)^2\sigma^2.

    Then the zero bond price as a function of short rate {r} takes form

    \displaystyle  P(t,T)=A(t,T)e^{-B(t,T)r_t}, \ \ \ \ \ (4)

    where

    \displaystyle  \begin{array}{l} A(t,T) =\frac{P(0,T)}{P(0,t)}\exp\left(B(t,T)\alpha(t)+\frac12\int_0^t \left(B(u,t)^2- B(u,T)^2\right)\sigma^2 du\right)\\ \\ = \frac{P(0,T)}{P(0,t)}\exp\left(B(t,T)f(0,t)+\frac12B(t,T)B(0,t)^2\sigma^2+\frac12\int_0^t \left(B(u,t)^2- B(u,T)^2\right)\sigma^2 du\right) \end{array}

    Carrying out the integrals explicitly, we have

    \displaystyle  A(t,T)= \frac{P(0,T)}{P(0,t)}\exp\left(B(t,T)f(0,t)-\frac{\sigma^2}{4a}(1-e^{-2at})B(t,T)^2\right).
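
    Thm 1 and Rem 1 combine into a compact pricer. A sketch for constant {\sigma}; the flat initial curve {P(0,t)=e^{-f_0t}} (so {f(0,t)=f_0}) is an assumption used only to keep the example self-contained.

    #include <cmath>
    #include <cstdio>

    double B(double t, double T, double a) { return (1 - std::exp(-a * (T - t))) / a; }

    // P(t,T) = A(t,T) exp(-B(t,T) r_t), equation (4), under a flat initial curve
    double bond(double t, double T, double r, double a, double sigma, double f0) {
        double b = B(t, T, a);
        double A = std::exp(-f0 * (T - t)) *          // P(0,T)/P(0,t)
                   std::exp(b * f0 - sigma * sigma / (4 * a) * (1 - std::exp(-2 * a * t)) * b * b);
        return A * std::exp(-b * r);
    }

    int main() {   // illustrative parameters only
        std::printf("P(1,5) = %.6f\n", bond(1.0, 5.0, 0.03, 0.1, 0.01, 0.03));
    }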

    2. Zero Bond Options in Hull-White 1F

    We now consider European call options on zero bonds.

    \displaystyle  \begin{array}{l} {\mathbf{ZBC}}(t,T,\tau,K)={\mathbb E}\left[\left.e^{-\int_t^T r_u du}(P(T,\tau)-K)^{+}\right|{\mathcal{F}}_t\right]\nonumber \\ \\ \qquad\quad = P(t,T){\mathbb E}^T\left[\left.(P(T,\tau)-K)^{+}\right|{\mathcal{F}}_t\right] \end{array}

    As the zero-bond price {P(T,\tau)} is a deterministic function of the short rate {r(T)}, we need to know the distribution of the process {r} under the {T}-forward measure in order to evaluate the latter expectation above. First, differentiating (4) yields the following SDE under the risk-neutral measure

    \displaystyle  \frac{dP(t,T)}{P(t,T)}=(\cdots)dt-B(t,T)\sigma dW_t.

    Let {B(t)} be the bank account. Then the risk-neutral measure and {T}-forward measure are linked through the process

    \displaystyle  \xi_t=\frac{P(t,T)}{P(0,T)}\frac{B(0)}{B(t)}=\frac{P(t,T)}{P(0,T)B(t)}

    By the quotient rule we get

    \displaystyle  d\xi_t=(\cdots)dt+\frac{1}{P(0,T)}\frac{-P(t,T)B(t,T)\sigma dW_t}{B(t)}

    Because {\xi_t} is a martingale under the risk-neutral measure, we must have

    \displaystyle  d\xi_t=-B(t,T)\sigma\xi_tdW_t.

    Furthermore, Girsanov’s theorem says

    \displaystyle  dW_t^T=dW_t+B(t,T)\sigma dt,

    is a Brownian motion under {T}-forward measure. The SDE for the process {x_t} now becomes

    \displaystyle  dx_t = [-B(t,T)\sigma(t)^2-ax_t]dt+\sigma(t) dW_t^T.

    It follows that {x_t} conditional on {{\mathcal{F}}_s} is, under the {T}-forward measure, still Gaussian with variance given by

    \displaystyle  {\mathbf{Var}}^T[\left.x_t\right|{\mathcal{F}}_s]=\int_s^t e^{-2a(t-u)}\sigma(u)^2du.

    To find the distribution of the zero bond {P(T,\tau)} under the {T}-forward measure, we repeat the calculation of the preceding section and get an expression similar to (3). Then,

    \displaystyle  \begin{array}{l} {\mathbb E}^T\left[\left.(P(T,\tau)-K)^{+}\right|{\mathcal{F}}_t\right]=\int_{-\infty}^{+\infty}\frac{1}{\sqrt{2\pi}V_p}\left(e^z-K\right)^{+}e^{-\frac12\frac{(z-M_p)^2}{V_p^2}}dz\\ \\ \qquad=e^{M_p+\frac12 V_p^2}\Phi\left(\frac{M_p-\ln K+V_p^2}{V_p}\right)-K\Phi\left(\frac{M_p-\ln K}{V_p}\right), \end{array}

    where

    \displaystyle  V_p^2=B(T,\tau)^2{\mathbf{Var}}^T[\left.x_T\right|{\mathcal{F}}_t]=B(T,\tau)^2\int_t^T e^{-2a(T-u)}\sigma(u)^2du.

    The last step is to calculate the mean {M_p}. Since {P(t,\tau)/P(t,T)} is a martingale under the {T}-forward measure, we have

    \displaystyle  \frac{P(t,\tau)}{P(t,T)}={\mathbb E}^T[P(T,\tau)|{\mathcal{F}}_t]=e^{M_p+\frac12 V_p^2}

    Thus,

    \displaystyle  M_p=\ln\frac{P(t,\tau)}{P(t,T)}-\frac12 V_p^2

    Finally, we obtain

    \displaystyle  \begin{array}{l} {\mathbf{ZBC}}(t,T,\tau,K)=P(t,T){\mathbb E}^T\left[\left.(P(T,\tau)-K)^{+}\right|{\mathcal{F}}_t\right]\\ \\ \quad =P(t,\tau)\Phi\left(\frac{\ln\frac{P(t,\tau)}{P(t,T)K}+\frac12 V_p^2}{V_p}\right)-P(t,T)K\Phi\left(\frac{\ln\frac{P(t,\tau)}{P(t,T)K}-\frac12 V_p^2}{V_p}\right) \end{array}
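
    The final formula as code, a sketch for constant {\sigma} (so that {V_p^2=B(T,\tau)^2\sigma^2(1-e^{-2a(T-t)})/(2a)} in closed form); the discount factors {P(t,T)} and {P(t,\tau)} are inputs, and the values in main are illustrative only.

    #include <cmath>
    #include <cstdio>

    double Phi(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

    double zbc(double PtT, double Ptau, double K,
               double t, double T, double tau, double a, double sigma) {
        double Btau = (1 - std::exp(-a * (tau - T))) / a;   // B(T, tau)
        double Vp = Btau * sigma * std::sqrt((1 - std::exp(-2 * a * (T - t))) / (2 * a));
        double d = std::log(Ptau / (PtT * K)) / Vp;
        return Ptau * Phi(d + 0.5 * Vp) - PtT * K * Phi(d - 0.5 * Vp);
    }

    int main() {   // illustrative inputs only
        std::printf("ZBC = %.6f\n", zbc(0.95, 0.90, 0.94, 0.0, 1.0, 2.0, 0.1, 0.01));
    }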
