Project Meeting (2/3/2018)

Topics to Discuss

Seasonal Adjustment in Supervised Learning

After doing some research on seasonal adjustment, I couldn't figure out how to apply it in supervised learning (neural networks). What follows is a discussion of seasonal adjustment approaches and some questions I have been wondering about.

Seasonality and trend in a time series represent a dependence on the value of time. A classical time series decomposition is $y_t = T_t + S_t + I_t$, where $T_t$ is the trend, $S_t$ is the seasonal component and $I_t$ is the irregular component. $I_t$ is independent of time and is stationary. In forecasting, we are only interested in $I_t$, so it is important to remove $S_t$ and $T_t$. This process is called seasonal adjustment.
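As a concrete illustration (not from the original notes), here is a minimal Python sketch of this additive decomposition on a synthetic monthly series, assuming a seasonal period of 12: the trend is estimated with a centred moving average and the seasonal component with per-position averages of the detrended series.

```python
import numpy as np
import pandas as pd

period = 12
t = np.arange(10 * period)
y = pd.Series(0.05 * t                                      # trend T_t
              + 2.0 * np.sin(2 * np.pi * t / period)        # seasonal S_t
              + np.random.normal(scale=0.5, size=t.size))   # irregular I_t

# Trend: centred moving average over one full seasonal cycle.
trend = y.rolling(window=period, center=True).mean()

# Seasonal: average detrended value at each position in the cycle,
# normalised to sum to zero over one cycle.
detrended = y - trend
seasonal_means = detrended.groupby(t % period).mean()
seasonal = pd.Series((seasonal_means - seasonal_means.mean()).values[t % period])

# Seasonally adjusted, detrended series: the irregular component I_t.
irregular = y - trend - seasonal
```

A library routine such as statsmodels' seasonal_decompose implements essentially the same classical procedure, if a ready-made implementation is preferred.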

Seasonal Adjustment Methods

There are generally two types of approaches for seasonal adjustment: estimating the seasonal and trend components explicitly and removing them from the series, or differencing the series until it becomes stationary. Both are discussed below in the context of supervised learning.

Seasonal adjustment in supervised learning

Seasonal estimation and supervised learning

In supervised learning the data is split into training, validation and test sets (the validation set is sometimes optional). The training set is used to train the model, and the validation and test sets are used to evaluate the model's performance.

The point of using a test/validation set is to simulate unseen data on which to evaluate the model. It is important not to “leak” any information from the test set into the training set. Furthermore, when forecasting time series, the data is ordered and it is important not to leak any information from future observations into previous ones. For example, when doing one-step-ahead prediction on the test set, each prediction is made based on an observation ($y_t$ is used to predict $y_{t+1}$). The assumption should be that at time $t$ we do not know the value of $y_{t+1}$. Any data preprocessing done on the whole time series (e.g. data standardisation/normalisation) would “leak” information about future values of the series.
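To make the point concrete, here is a minimal sketch (names and split fractions are illustrative assumptions, not code from the references) of a time-ordered split where the standardisation parameters are fitted on the training set only and then reused for the validation and test sets, so nothing about future values influences the preprocessing.

```python
import numpy as np

def time_ordered_split(y, train_frac=0.7, val_frac=0.15):
    """Split a time-ordered series into train/validation/test without shuffling."""
    n_train = int(len(y) * train_frac)
    n_val = int(len(y) * val_frac)
    return y[:n_train], y[n_train:n_train + n_val], y[n_train + n_val:]

y = np.cumsum(np.random.normal(size=500))     # stand-in for a real series
train, val, test = time_ordered_split(y)

# Fit the standardisation on the training set only, then apply the same
# parameters downstream, so no future information leaks backwards.
mu, sigma = train.mean(), train.std()
train_z, val_z, test_z = (train - mu) / sigma, (val - mu) / sigma, (test - mu) / sigma

# One-step-ahead supervised pairs: y_t is the input, y_{t+1} the target.
X_train, y_train = train_z[:-1], train_z[1:]
```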

The seasonal estimation methods mentioned above all use the whole time series to estimate the seasonal and trend components. This means that, when a supervised learning model is evaluated, they will “leak” information about future values of the series.
Although this seems to be the case, most authors are not concerned with this issue ([1], [2], [3]) and apply seasonal adjustment to the whole time series before training and evaluating their supervised learning models. This tutorial even suggests applying standardisation to the whole time series.
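One way to see the alternative is to estimate the seasonal indices from the training portion alone and reuse them on the test portion, instead of decomposing the whole series before splitting as in [1]–[3]. A rough sketch, with the period, split sizes and synthetic data as illustrative assumptions:

```python
import numpy as np

period, n, n_train = 12, 240, 192
t = np.arange(n)
y = 2.0 * np.sin(2 * np.pi * t / period) + np.random.normal(scale=0.5, size=n)

train, test = y[:n_train], y[n_train:]

# Seasonal indices estimated from the training portion only.
pos_train = t[:n_train] % period
seasonal_idx = np.array([train[pos_train == p].mean() for p in range(period)])
seasonal_idx -= seasonal_idx.mean()          # centre the indices

# Adjust both portions with the training-only estimates; adjusting the whole
# series before splitting would instead let future (test) values influence them.
train_adj = train - seasonal_idx[pos_train]
test_adj = test - seasonal_idx[t[n_train:] % period]
```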

Can the effects of “leaked” information from future values really be ignored? Are there any effects at all, or is my reasoning wrong?

Differencing and supervised learning

Differencing the time series until it becomes stationary does not suffer from leaking future information. However, it is not a reversible operation, which means there is no way to compare the results of a supervised learning model trained on raw values with one trained on differenced values. The predictions made by models trained on differenced data are on a completely different scale from those made on raw data. How could forecasts produced by such a model be of any use without a way to reverse the transformation?
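For illustration, a minimal sketch of first differencing and of the scale mismatch it creates; the last two lines show one common way a level forecast can still be recovered, under the assumption that the last observed value is available at prediction time (all names are illustrative):

```python
import numpy as np

y = np.cumsum(np.random.normal(loc=0.1, size=200))   # non-stationary level series
dy = np.diff(y)                                       # first differences: y_{t+1} - y_t

# A model trained on `dy` predicts changes, not levels, so its raw outputs are
# not directly comparable with those of a model trained on `y`.
dy_hat = dy[-1]           # stand-in for a model's predicted next change
y_hat = y[-1] + dy_hat    # implied level forecast for the next observation
```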

[1]
G. P. Zhang and M. Qi, “Neural network forecasting for seasonal and trend time series,” European Journal of Operational Research, vol. 160, no. 2, pp. 501–514, Jan. 2005.

[2]
D. S. Salazar, P. J. Adeodato, and A. L. Arnaud, “Data transformations and seasonality adjustments improve forecasts of MLP ensembles,” in 2012 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), 2012, pp. 139–144.

[3]
M. Qi and G. P. Zhang, “Trend time series modeling and forecasting with neural networks,” in 2003 IEEE International Conference on Computational Intelligence for Financial Engineering, 2003, pp. 331–337.
