Modeling Financial Time Series

This post started as a replication of an article that used machine learning to predict financial time series. The article used Python, and I thought it would be fun to replicate it in R. But I found a lot of problems with the analysis as I worked through the replication. I don’t want to publicly tear down another person’s work, so this post will instead focus on common errors I see when people do analysis, specifically machine learning, on time series.

Financial time series are notoriously noisy, with a low signal-to-noise ratio. This makes it hard to evaluate whether your machine learning model is fitting the signal or the noise. As a result, machine learning models on financial time series often do not do as well as simpler models.

I’d like to mention Deep Learning with R by Francois Chollet and JJ Allaire because it does a great job emphasizing that you need to evaluate your super-cool machine learning model against simpler models.

In addition to often performing better, simpler models make it easier to explain how changes in the inputs change the predictions. Machine learning models are more like a black box: it’s very hard to explain how changes in the inputs change the predictions they make.

Time series data usually have several characteristics that can bias model predictions, including autocorrelation, non-stationarity, seasonality, and time-varying or clustered volatility. These characteristics cause problems for simple models and machine learning models alike, so you need to account for them to avoid biasing your predictions.

Always plot your data before estimating any model. Sometimes it’s easier to spot issues with your data visually. At minimum, plots can give you some indication of what type(s) of statistical tests you should perform.

Last, but not least, you have to be very careful to avoid look-ahead bias in your data when building time series models. Look-ahead bias is when you use data you would not have at the time you need to make a prediction. We’ll cover some very subtle look-ahead issues in this post.
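One common, subtle leak is applying a signal to the same bar it was computed from. Here’s a minimal sketch with a made-up toy series and trading rule (all the variable names below are hypothetical):

# toy daily 'returns' series; the rule below is purely illustrative
library(xts)
set.seed(21)
x <- xts(rnorm(10), order.by = as.Date("2020-01-01") + 0:9)
signal <- x > 0                    # computed from today's close, known only at the close
wrong  <- x * signal               # look-ahead: trades the same bar the signal uses
right  <- x * lag.xts(signal, 1)   # correct: act on the signal the following day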

Let’s get started.

Import Data

I’m going to use the quantmod package to pull data for the Dow Jones Industrial Average index from Yahoo Finance, and the xts package for data manipulation.

library(quantmod)  # also attaches the xts package

# download the data once and cache it locally, so re-runs
# don't depend on Yahoo Finance being available
#dji <- getSymbols("^DJI", auto.assign = FALSE)
#saveRDS(dji, file = "dji.rds")
dji <- readRDS("dji.rds")

# plot a candlestick chart
chart_Series(dji)

Plot Returns

# extract close prices
close_prices <- Cl(dji)
# calculate returns (ROC() defaults to continuously-compounded, i.e. log, returns)
returns <- na.omit(ROC(close_prices))

plot(close_prices)
plot(returns)

Autocorrelation

Autocorrelation (also called serial correlation) occurs when observations in your series are not independently distributed. That is, the current value is a function of one or more prior values. This is a problem because many models assume that observations are iid (independent and identically distributed).

You can visually evaluate your series for autocorrelation using an autocorrelation function (ACF) plot. It shows an estimate of how the current observation is related to observations at prior points in time. Significant autocorrelations that cut off after a few lags suggest the series has a moving average (MA) component.

acf(close_prices)
acf(returns)

The ACF of the close prices decays very slowly and stays near 1 at every lag shown: each day’s price is almost entirely explained by the prior day’s price, a hallmark of a non-stationary, random-walk-like series. The ACF of the returns, by contrast, is close to zero at almost all lags.

You can also inspect the partial autocorrelation function (PACF). The PACF at lag n measures the correlation between the current observation and the observation n periods prior, after removing the effects of all the shorter lags. The ACF, by contrast, does not control for those intermediate lags. Significant lags in the PACF suggest the series has an auto-regressive (AR) component.

pacf(close_prices)
pacf(returns)

The PACF of the close prices has one large spike at lag 1 and little else, consistent with a random-walk-like AR(1) process. The PACF of the returns shows a few small but statistically significant spikes at short lags.

There are a few statistical tests that are useful for evaluating whether there are autocorrelation(s) in your data.

You can use the Durbin-Watson test to determine whether regression residuals have significant first-order (1-period lag) autocorrelation. You can run a Durbin-Watson test using the dwtest() function in the lmtest package.

The Breusch-Godfrey test generalizes this to autocorrelation at higher-order lags. You can run a Breusch-Godfrey test using the bgtest() function in the lmtest package.

The Ljung-Box test (https://en.wikipedia.org/wiki/Ljung-Box_test) jointly tests whether the autocorrelations at several lags differ from zero. You can run a Ljung-Box test using the Box.test() function in base R with type = "Ljung-Box".
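Here’s a quick sketch of running all three on the return series (I’m assuming the lmtest package is installed; the intercept-only regressions simply demean the series):

library(lmtest)
r <- as.numeric(returns)
dwtest(r ~ 1)                               # Durbin-Watson: 1-period lag
bgtest(r ~ 1, order = 5)                    # Breusch-Godfrey: lags 1 through 5
Box.test(r, lag = 10, type = "Ljung-Box")   # Ljung-Box: joint test of lags 1-10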

Non-stationarity

A series is non-stationary if it has a unit root (a stochastic trend), or if its mean and/or variance change over time.

# mean is obviously changing over time
plot(close_prices, subset="/2019")
addSeries(runSD(close_prices, 120), on = NA)  # 120-day rolling standard deviation in a new panel

tseries::adf.test(close_prices)
## 
##  Augmented Dickey-Fuller Test
## 
## data:  close_prices
## Dickey-Fuller = -2.8677, Lag order = 15, p-value = 0.2109
## alternative hypothesis: stationary

# mean is stationary, but variance is not
# the ADF test only checks for a unit root, so it won't flag the changing variance
plot(returns, subset="/2019")
addSeries(runSD(returns, 120), on = NA)

tseries::adf.test(returns)
## Warning in tseries::adf.test(returns): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  returns
## Dickey-Fuller = -14.58, Lag order = 15, p-value = 0.01
## alternative hypothesis: stationary

Augmented Dickey-Fuller

This test has low statistical power, meaning it often cannot distinguish between true unit-root processes and near-unit-root processes.
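You can see this in a quick simulation (the AR coefficient and sample size below are arbitrary): a stationary but highly persistent AR(1) process looks a lot like a unit-root process, and the ADF test will frequently fail to reject.

# stationary AR(1) with phi = 0.99: no unit root, but very persistent
set.seed(42)
x <- arima.sim(model = list(ar = 0.99), n = 500)
tseries::adf.test(x)  # p-value is often well above 0.05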


If you love using my open-source work (e.g. quantmod, TTR, IBrokers, microbenchmark etc.), you can give back by sponsoring me on GitHub. I truly appreciate anything you’re willing and able to give!