### the presentation

MFE 8827 PROJECT
WITH THE LOGISTIC MIXTURE AUTOREGRESSIVE MODEL
By: Ivan Xiao, Teo Dengjie, Kenneth Tan
CO-INTEGRATION

Components of the vector time series Xt are said to be co-integrated of order d, b (d ≥ b), denoted Xt ~ CI(d, b), if (i) all components of Xt are I(d); and (ii) a vector α exists so that zt = α'Xt ~ I(d−b), b > 0. The vector α is called the co-integrating vector and zt is called the equilibrium error.

Financial data are typically I(1).

Co-integration between stocks therefore yields a long-term relationship and an equilibrium error zt which
is stationary, i.e. I(0).

Prior research centers on ARMA models for describing the zt time series and on testing whether the largest
root of the AR polynomial is close to one.

zt's with largest roots extremely close to one are still considered I(0), and the corresponding linear combination of Xt is therefore
classified as mean-reverting.
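The I(1)-prices / I(0)-spread distinction above can be illustrated with simulated data. A minimal pure-Python sketch using hypothetical series (not real stocks): two random walks share a common stochastic trend, so each is I(1), while the spread formed with the assumed co-integrating vector (1, −0.5) is I(0), which shows up in the lag-1 autocorrelation.

```python
import random

random.seed(42)
N = 2000

# Common stochastic trend: a pure random walk, so each price series is I(1).
trend = [0.0]
for _ in range(N - 1):
    trend.append(trend[-1] + random.gauss(0, 1))

# Two hypothetical "prices" sharing the trend plus stationary noise.
x = [t + random.gauss(0, 0.5) for t in trend]
y = [2.0 * t + random.gauss(0, 0.5) for t in trend]

# Co-integrating vector alpha = (1, -0.5): z_t = x_t - 0.5 * y_t is I(0).
z = [xi - 0.5 * yi for xi, yi in zip(x, y)]

def lag1_autocorr(s):
    """Lag-1 autocorrelation: near 1 for a random walk, near 0 for I(0) noise."""
    m = sum(s) / len(s)
    num = sum((s[t] - m) * (s[t - 1] - m) for t in range(1, len(s)))
    den = sum((v - m) ** 2 for v in s)
    return num / den

rho_x = lag1_autocorr(x)   # close to 1: non-stationary
rho_z = lag1_autocorr(z)   # close to 0: the equilibrium error is stationary
```

A formal test (ADF or Johansen, as used later in this write-up) replaces the autocorrelation heuristic in practice.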

PROBLEM: It is difficult to assess whether zt is a mixture of a fast mean-reverting process and a random walk, or
whether zt is a single slowly mean-reverting process.
SOLUTION: LOGISTIC MIXTURE AUTOREGRESSIVE MODEL (I)

Regime switching time series model.

Allows different mean structures and volatility levels in different regimes.

Probabilities of being in different regimes change dynamically according to a logistic link function.

Consider the following linear combination of time series variables:

α'Xt = α0 + at

Where
Xt = (X1,t, X2,t, X3,t, …, XN,t)' is a vector of asset returns
α is the co-integration vector
α0 is the long-term mean
at is the equilibrium error, which should be stationary if the asset returns are co-integrated.
LMAR MODEL (II)


Let Ft be the σ-field generated by {at, at−1, …}, F(at | Ft−1) the conditional cumulative
distribution function of at given Ft−1, and Φ(·) the cumulative distribution function of the standard
Gaussian distribution.
Then, the LMAR model is given as:

F(at | Ft−1) = Σ_{k=1}^{2} pk,t Φ(ek,t / σk)

ek,t = at − φk,0 − Σ_{i=1}^{mk} φk,i at−i,  k = 1, 2

and

rt = log(p1,t / p2,t) = δ0 + Σ_{i=1}^{n} δi |at−i|

Where
pk,t are the mixing probabilities, i.e. p1,t is the probability of being in regime 1
ek,t is the residual of an AR(mk) model on at
rt is the log-odds of regime 1 versus regime 2
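The three equations above can be simulated directly. A minimal sketch with made-up parameter values (an AR(1) in each regime and one lag in the logistic link — illustrative numbers, not the paper's estimates): the log-odds rt are driven by the lagged absolute spread, p1,t follows from the logistic link, and at is drawn from the AR process of the sampled regime.

```python
import math
import random

random.seed(0)

# Hypothetical LMAR parameters (illustrative only).
phi1 = (0.0, 0.3)            # regime 1 (stationary AR(1)): phi_{1,0}, phi_{1,1}
phi2 = (0.0, 1.0)            # regime 2 (random walk):      phi_{2,0}, phi_{2,1}
sigma1, sigma2 = 0.5, 0.5
delta0, delta1 = -1.0, 2.0   # logistic link: r_t = delta0 + delta1 * |a_{t-1}|

a = [0.0, 0.0]
p1_path = []
for t in range(2, 500):
    # Log-odds of the stationary regime rise with the absolute lagged spread.
    r = delta0 + delta1 * abs(a[t - 1])
    p1 = math.exp(r) / (1.0 + math.exp(r))   # p_{1,t} = e^{r_t} / (1 + e^{r_t})
    p1_path.append(p1)
    # Draw the regime, then take one step of the corresponding AR(1).
    if random.random() < p1:
        mean, sd = phi1[0] + phi1[1] * a[t - 1], sigma1
    else:
        mean, sd = phi2[0] + phi2[1] * a[t - 1], sigma2
    a.append(random.gauss(mean, sd))
```

With δ1 > 0, large deviations push p1,t toward 1, so the stationary regime takes over and pulls the spread back: the simulated path stays bounded.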
LMAR MODEL (III)

Consider 2 regimes


Stationary Regime (regime 1)
Random Walk Regime (regime 2)

ek,t follows from the above 2 regimes, which are 2 separate AR processes. If the Xt's are co-integrated,
the innovations follow a white noise process with variance σk², k = 1, 2, since at ~ I(0).

Thus we model the spread at based on 2 regimes, hence the conditional cumulative distribution
function is a mixture of 2 Gaussian processes.

Assumption made is that the variance in each regime is constant.

The absolute spread |at−i| controls the probability of being in a regime. The rationale is that, when
the basket deviates from the mean α0 by a large amount, arbitrageurs in the market react to bring
the spread back to fair value.

Thus when the spread is larger, the basket falls back into the stationary regime by mean-reverting back to the mean.

p1,t is the probability of being in regime 1, since

|at−i| large ⇒ rt large ⇒ p1,t large

The above relation is based on the assumption that δi ≥ 0 for all i. This can be achieved by
construction.
ESTIMATING THE MODEL (I)

Required parameters to estimate and optimize for the model include

- Co-integrating vector α
- Lag term mk in the AR(mk) model for the at process in regime k, k = 1, 2.
- Lag term n in the logistic part of the LMAR model for the rt process.
- Volatility σk for each regime.
- Regression coefficients φk,i of the AR(mk) model for the at process, i = 0, …, mk.
- Coefficients δi of the logistic model for the rt process, i = 0, …, n.

Assume α remains fixed in consideration of transaction costs i.e. avoid frequent rebalancing.

Ordinary Least Squares (OLS) regression is used to estimate α. We choose 1 asset as the response
variable and regress the remaining assets in the basket against it. The regression coefficients
represent the proportion of each asset to buy / sell relative to 1 unit of the response variable.

The residuals of the OLS are then checked for stationarity. This can be done with either



Augmented Dickey-Fuller Test – useful for pairs trading
Johansen Test – useful for trading more than 2 assets
From the OLS results, we can get at, which is the demeaned OLS residuals. With the knowledge
of at we can then estimate the rest of the parameters listed above.
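The OLS step above can be sketched as follows for a pair. The data are a hypothetical simulated pair (not USB/WFC), and the stationarity test is omitted here — only the hedge-ratio regression and the demeaned residual series at are shown.

```python
import random

random.seed(1)
n = 750

# Hypothetical price series: y co-integrated with x (illustrative data only).
x = [100.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0, 1))
y = [0.6 * xi + 5.0 + random.gauss(0, 0.4) for xi in x]

# OLS of the response asset y on the other asset x: y_t = c + beta * x_t + resid_t.
mx = sum(x) / n
my = sum(y) / n
beta_hat = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
c_hat = my - beta_hat * mx

# The coefficient is the hedge ratio: 1 unit of y vs. beta_hat units of x.
resid = [yi - c_hat - beta_hat * xi for xi, yi in zip(x, y)]
mean_resid = sum(resid) / n
a_t = [r - mean_resid for r in resid]   # demeaned residuals = equilibrium error a_t
```

In the actual project the residuals would next be passed to an ADF (pairs) or Johansen (baskets) test before any LMAR fitting.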
ESTIMATING THE MODEL (II)

The EM algorithm was used to estimate the remaining regression parameters.

Likelihood function (for N data points):

The conditional distribution F(at | Ft−1) = Σ_{k=1}^{2} pk,t Φ(ek,t / σk) implies the conditional density

f(at | Ft−1) = Σ_{k=1}^{2} pk,t (1 / √(2π σk²)) exp(−½ (ek,t / σk)²)

so the log-likelihood is

l = log( Π_{t=s+1}^{N} f(at | Ft−1) ) = Σ_{t=s+1}^{N} log( Σ_{k=1}^{2} pk,t (1 / √(2π σk²)) exp(−½ (ek,t / σk)²) )

With the regime indicators Zk,t used in the EM algorithm (introduced on the next slide), the complete-data log-likelihood simplifies to

l = Σ_{t=s+1}^{N} Σ_{k=1}^{2} Zk,t ( log pk,t − log σk − ek,t² / (2σk²) )

Where
s = max(m1, m2, n),
and we have dropped the logarithm of a constant (−½ log 2π per term) in view of the maximization step that follows.
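The complete-data log-likelihood translates directly into code. A sketch — the function name and argument layout are our own, not from Algoquant or SuanShu:

```python
import math

def complete_loglik(a, Z, phi, sigma, p, s):
    """Complete-data log-likelihood of the 2-regime LMAR model, dropping the
    constant -0.5*log(2*pi) per term. Arguments (our own sketch):
      a        spread series a_t (0-indexed)
      Z[k][t]  indicator weight of regime k at time t (EM responsibilities)
      phi[k]   AR coefficients (phi_{k,0}, ..., phi_{k,mk}) of regime k
      sigma    (sigma_1, sigma_2)
      p[k][t]  mixing probability p_{k,t}
      s        max(m1, m2, n); t runs over s..len(a)-1, i.e. t = s+1..N on the slide
    """
    ll = 0.0
    for t in range(s, len(a)):
        for k in (0, 1):
            mk = len(phi[k]) - 1
            # AR residual e_{k,t} = a_t - phi_{k,0} - sum_i phi_{k,i} * a_{t-i}
            e = a[t] - phi[k][0] - sum(phi[k][i] * a[t - i] for i in range(1, mk + 1))
            ll += Z[k][t] * (math.log(p[k][t]) - math.log(sigma[k])
                             - e * e / (2.0 * sigma[k] ** 2))
    return ll
```

In the EM loop, the E-step replaces the hard indicators Zk,t with posterior responsibilities, and the M-step maximizes this function over φ, σ and δ.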
ESTIMATING THE MODEL (III)

With the likelihood function for the at process derived from F(at | Ft-1), we adopt the EM
algorithm by first assuming arbitrary indicator values, Zk,t for regime k (k = 1,2).

Then, using brute force, we estimate the parameters listed in slide 6 for each combination of
m1 = x, m2 = y, n = z, for x, y, z ∈ {0, 1, 2, 3}. The recursive process detailed on page 4 of the research
paper is used for estimating the parameters. Note that we used Nelder-Mead to estimate ξ, the
vector of coefficients for the logistic model (ξ ≡ (δ0, δ1, …, δn)'). A possible drawback of this
method is that the Nelder-Mead algorithm may get caught in a local maximum and thus fail to
converge to the global optimum. We set this issue aside at this point in view of possible improvement in future
analysis.

To choose the best model, we consider the Schwarz Bayesian Information Criterion (SBIC):

SBIC = −2l* + log(N − max(m1, m2, n)) · (3 + m1 + m2 + n)

Where
l* is the optimized log-likelihood function value.

The chosen model dimensions (m1*, m2*, n*) are those associated with the smallest SBIC value. Thus we
have our LMAR model.
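The model-selection loop can be sketched as follows. The fitted log-likelihood values below are placeholders standing in for the EM output (a fake surface that peaks at (2, 1, 1) purely for illustration), and the SBIC is computed as −2l* plus the complexity penalty, the sign convention consistent with picking the smallest value.

```python
import math
from itertools import product

def sbic(loglik, N, m1, m2, n):
    """SBIC = -2*l + (3 + m1 + m2 + n) * log(N - max(m1, m2, n))."""
    return -2.0 * loglik + (3 + m1 + m2 + n) * math.log(N - max(m1, m2, n))

N = 1000
# Placeholder optimized log-likelihoods per (m1, m2, n); in the real run each
# value is the output of the EM fit described above.
fitted_loglik = {
    (m1, m2, n): -500.0 - 10.0 * ((m1 - 2) ** 2 + (m2 - 1) ** 2 + (n - 1) ** 2)
    for m1, m2, n in product(range(4), repeat=3)
}

# Choose the model dimensions with the smallest SBIC.
best = min(fitted_loglik, key=lambda dims: sbic(fitted_loglik[dims], N, *dims))
```

The penalty term grows with m1 + m2 + n, so the criterion trades fit against parsimony exactly as described above.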
STRATEGY (BASIC IDEA)

The basic idea is to construct a basket that has a stationary at, which is the deviation of the basket
from the long-term mean α0. If at is significantly greater or smaller than α0, then we open the
basket based on the co-integrating vector α. This is the basis of traditional co-integration trading based on
ARMA models.

In LMAR models though, p1,t controls the entry and exit signals for the trade. Recall that p1,t is
the probability that at will fall into the stationary regime. It is affected by the absolute deviation
from the mean, i.e. |at−i|, i = 1, …, n. Thus the LMAR model considers both co-integration and
regime switching.

Thus, the beauty of the strategy lies in p1,t, which summarizes the concepts on the previous slides
into one single number.

The fact that p1,t utilizes information from lagged data is also consistent with the usual practice of
technical analysis using historical prices.
STRATEGY (STEPS)

From a basket of stocks, we have Xi,t. In the paper, cumulative returns are constructed, but in our
analysis we decided to use stock prices instead.

For an arbitrary training period, e.g. the previous 500 trading days:

- Generate co-integrating vectors α for all combinations of N-asset baskets. For example, there are 20C2 = 190 pairs among 20 stocks. Note that not all combinations will be co-integrated. Sort based on the highest Johansen Test statistic.
- Estimate the co-integrated LMAR parameters based on the training data.
- Choose only the baskets with δi ≥ 0 for all i.
- Rank the baskets in ascending order of the standard error of the estimated at series. The rationale for this is that the higher the standard error σ, the higher the probability that the basket will significantly deviate from the long-term mean, hence generating a higher chance of being in a mean-reverting regime.
For the next trading day, and using the LMAR model estimated from the training period:

- Monitor p1,t and open the basket according to α when p1,t is above its 95% quantile based on the estimated p1,t series in the training period.
- Close the basket position when p1,t crosses its median based on the estimated p1,t series in the training period.
- Set a maximum trading life; if the open basket is not closed within this trading period, it is nevertheless unwound on the last day of the trading life.
- The decision to use quantiles and medians (over the more commonly used standard deviation and mean) rests on the fact that the distribution of p1,t is heavily skewed.
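The quantile-based entry/exit rules above can be sketched as follows. The skewed p1,t training series is simulated here for illustration, and the 15-day maximum life is an arbitrary placeholder.

```python
import random
import statistics

random.seed(7)

# Hypothetical training-period p_{1,t} series, heavily skewed toward 1 as in the text.
p1_train = [random.betavariate(8, 1) for _ in range(500)]

entry = sorted(p1_train)[int(0.95 * len(p1_train))]  # 95% quantile -> open signal
exit_ = statistics.median(p1_train)                  # median -> close signal

def signal(p1_today, in_market, days_held, max_life=15):
    """Return the next position state (True = basket open) under the quantile rules."""
    if not in_market:
        return p1_today > entry          # open when p1,t exceeds its 95% quantile
    if p1_today < exit_ or days_held >= max_life:
        return False                     # close on median cross or forced unwind
    return True
```

Because the distribution is skewed, the 95% quantile and median thresholds land far from where a mean-plus-two-sigma rule would put them, which is the motivation given above.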
STRATEGY (BACK-TESTABLE PARAMETERS)

The parameters for the strategy steps that can be back-tested and optimized include

- The training period length.
- The critical value which p1,t must exceed to confirm being in a stationary regime.
- The maximum holding period after which the open basket is force-liquidated.

Due to time and resource constraints, we decided to set an arbitrary training period length and
focus on optimizing the critical value for pi,t and the maximum holding period.

We use both Sharpe Ratio and Omega performance measures in the optimization of these two
parameters.

Algoquant and SuanShu were used for implementing the strategy. Notable functions used were

- Co-integration
- Sharpe Ratio calculation
- Omega calculation
EXAMPLE STRATEGY (I)

An example of the LMAR strategy considering US Bancorp (USB) and Wells Fargo (WFC). We
consider only pairs trading in view of the considerable obstacles we encountered during coding.
The logic can theoretically be extended to N-asset spreads at the cost of longer simulation times.

Training data set used is daily data from Jan 1, 2001 to Dec 31, 2004. This training data set was
set arbitrarily.

The optimal estimated parameters for the LMAR model based on this data sample are as follows:
| Parameter | Value |
| --- | --- |
| n | 1 |
| m1 | 3 |
| m2 | 1 |
| δi = [δ0, δ1] | [5.86245, -0.465143] |
| φ1,i = [φ1,0, φ1,1, φ1,2, φ1,3] | [0.006267, 0.976344, -0.084426, 0.088740] |
| φ2,i = [φ2,0, φ2,1] | [-1.538485, 0.433115] |
| σi = [σ1, σ2] | [0.403832, 0.988576] |
| β = α1 (since pairs trade) | -0.60846 |
| SBIC | -999.321 |
EXAMPLE STRATEGY (II)

Once we have estimated the LMAR model, we backtest the strategy on the same training data, with respect to:

- The critical value which p1,t must exceed to generate an open-basket signal
- The maximum holding period, T, after which the basket is liquidated if it is still open.

For the critical value, instead of the 95% quantile used in the paper, we decided to test a range above
and below the 95% quantile. The range is given by

critical value = (95%p(t) − 50%p(t)) × 0.1 × i + 95%p(t)

Where

- x%p(t) is the x% quantile value of the p1,t series generated from the test data
- i ∈ ℤ, with i ∈ [−5, 5]

Maximum holding period was chosen between 10 to 21 days i.e. approximately 2 to 3 weeks. This range can be
easily extended at the cost of computing efficiency.

The optimal strategy parameters are determined based on the critical value and holding period which maximize
the Sharpe Ratio or Omega of the trade.
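The critical-value grid from the formula above is straightforward to enumerate. A sketch with made-up quantile values — in the real run q95 and q50 are the 95% and 50% quantiles of the estimated p1,t series over the training period:

```python
# Hypothetical quantiles of the training-period p_{1,t} series (stand-in values).
q95, q50 = 0.95, 0.60

# critical value = (q95 - q50) * 0.1 * i + q95, for integer i in [-5, 5]
grid = [(q95 - q50) * 0.1 * i + q95 for i in range(-5, 6)]
# i = 0 (index 5) recovers the paper's plain 95%-quantile threshold.
```

Each grid point is then paired with each candidate holding period, and the (threshold, T) pair maximizing the Sharpe Ratio or Omega is kept.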

Additional strategy parameters that can be considered for backtesting include:




The length of the training period for which to estimate the LMAR model parameters.
The frequency of re-estimating the LMAR model parameters.
An optimal position strategy, not unlike that covered in Lecture 4, depending on how far the spread deviates from the mean.
We chose not to optimize the above in view of the considerable computational complexity and simulation time
that would be added. Currently, it already takes a significant amount of time to estimate the LMAR parameters
(approximately 4 hours per pair).
EXAMPLE STRATEGY (III)

For our example strategy, we assume an investment sum of 1000 units of response variable stock.

E.g. for our spread 1 USB vs. -0.60846 WFC, when we get a signal to buy the spread, we buy
1000 units of USB and short 608 units of WFC i.e. 1 spread = 1000 units of USB vs. 608 units of
WFC.

The assumption here is that the stocks are infinitely divisible. In certain countries (e.g. Singapore)
where the minimum trade size is 1000 shares of a company, the example strategy will have to be
adjusted accordingly.
Optimized strategy parameters and subsequent performance measure values are:

| USB coefficient | WFC coefficient | Optimal holding period | Critical value | PnL | Sharpe Ratio | Omega |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | -0.60846 | 15 days | 0.99715 | $515.28 | 0.0296967 | 1.09085 |
Coincidentally, optimizing based on Sharpe Ratio and Omega both give the same optimal strategy
parameters.
EXAMPLE STRATEGY (IV)

The price, PnL and signal charts are:
[Figure: time series of the Spread, PnL, Critical Value and p1,t]
EXAMPLE STRATEGY (V)

The performance of the strategy is not good. The Sharpe ratio is very low and the critical value
seems too high, i.e. with δ0 = 5.862, the model implies the pair is in the mean-reverting regime
99.71% of the time (= e^5.862 / (1 + e^5.862)).

This result is likely incorrect, and further investigation needs to be done into the selection of
stock pairs.

We attribute the spurious results to a lack of attention to checking whether at generated from
the USB-WFC spread is really I(0). Blindly, we used the Cointegration function in Algoquant and
went forward from there, with no regard to the Johansen Test statistic.

On further investigation of the USB-WFC spread, we realized it was not I(0) according to the ADF test
(appropriate since only 2 time series are involved). We apologize for this result, but we are confident
the code will work if a properly co-integrated time series is input.
IMPROVEMENTS AND CONCLUSION

Future improvements on the LMAR model include

- Simulation to determine the optimal test period to use.
- Inclusion of transaction costs. Currently our strategy assumes that we are able to buy at the daily close price.
- Modelling of σk such that the volatility of the AR(p) process for at is not assumed to be constant. The authors of the paper suggested incorporating a GARCH model for volatility estimation.
- Optimization of the entry strategy to be smarter than just going long or short 1 unit of the spread when a signal is generated.
- Proper testing of co-integration conditions on baskets to ensure that at is indeed mean-reverting.

In conclusion, we have added to the authors' strategy by backtesting for the optimal holding
period and critical value, rather than assuming an arbitrary value and the 95% quantile.

While the LMAR model is a definite improvement over traditional ARMA-model-based co-integration, many more improvements can be made to make the strategy more robust
and profitable overall.

One thing we experienced in this project, though, is that although on paper the general concept of
co-integration is very enticing, in practice it is difficult to identify truly co-integrated pairs
outside of the obvious ones. We entered this exercise hoping to find meaningful mean-reverting
relationships among various different stocks, instead of the generic spot-futures pair
that so often appears in textbooks.

However, even with stocks from the same industry, we were hard-pressed to find any truly co-integrated pairs. Hence our poor results from the simulation.

Overall though, this project has been an awesome learning experience and we are all glad we had
a chance to work on it, even though it was pretty difficult.
APPENDIX: (INSTRUCTIONS ON RUNNING CODE)

(1) Make sure the data files are in the same format as those attached with this write-up. The format
we have chosen is the same as that easily downloadable from Yahoo, so this should be no
issue.

(2) Copy the data files to the test-data folder in Algoquant.

(3) Cut and paste the Shuanshu/examples folder attached into the algoquant folder under

algoquant-0.0.3a\algoquant\test\com\numericalmethod

(4) Create a folder named ‘Results’ in algoquant-0.0.3a\algoquant

(5) In the ‘Results’ folder created above, create another two folders named ‘Backtest’ and ‘Coint’

(6) Run LMARStrategy.java to generate the LMAR model parameters. The results are contained in a
text file named ‘USB-WSC-Params’ in the ‘Coint’ folder created above.

(7) Run LMARTrader.java to generate performance measure plots and strategy parameters. The
strategy parameters and individual trade information are contained in the ‘2001-2004_USBWFC_Sharpe’ and ‘2001-2004_USB_WFC_Omega’ text files. The difference between the two
results depends on which performance measure is being optimized.

(8) The text files are comma-delimited, so Excel can be used to open them and inspect the results
easily.