LUP Student Papers

LUND UNIVERSITY LIBRARIES

Rank-Based Selection Strategies for Forecast Combinations: An Evaluation Study

Svensson, Magnus LU (2019) STAN40 20182
Department of Statistics
Abstract
This thesis evaluates four of the most popular methods for combining time series forecasts. One aspect that is often overlooked in the literature is the choice of which forecasts to include in a forecast combination. The focus here is to investigate the variability in forecast accuracy across all distinct subsets, drawn from a fixed set of eleven individual forecasting models, that a combination method can be fed. Six rank-based strategies for selecting these subsets are also evaluated.

The methods are evaluated across more than 1,000 monthly time series. The accuracy of one-period-ahead forecasts is analyzed; in total, more than 66 million forecasts are evaluated. The forecasts are assessed with the Mean Absolute Scaled Error metric and via a Model Confidence Set approach, the latter making it possible to generalize the results beyond the evaluation sample.
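For reference, the Mean Absolute Scaled Error (MASE) of Hyndman and Koehler (2006) scales out-of-sample absolute forecast errors by the in-sample mean absolute error of the naive (random-walk) forecast. Whether the thesis uses this non-seasonal form or the seasonal variant (lag 12 for monthly data) is not stated in the abstract; the standard definition is

\[
\mathrm{MASE} \;=\; \frac{\dfrac{1}{H}\sum_{h=1}^{H}\bigl|y_{T+h}-\hat{y}_{T+h}\bigr|}{\dfrac{1}{T-1}\sum_{t=2}^{T}\bigl|y_t-y_{t-1}\bigr|},
\]

where T is the length of the training sample and H the number of out-of-sample forecasts; with one-period-ahead forecasts, the numerator reduces to a single absolute error per forecast origin.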

Selecting the number of forecasts to include in a combination is often a matter of balancing risk and reward. The variance of forecast accuracy across the different subsets of input forecasts is greatest when few forecasts are included and decreases as more forecasts are added. The results suggest that the mean combination method is especially fragile if poorly performing subsets are selected. The three methods that use training data handle this situation much better. If the performance of some of the input forecasts lags behind the rest, it is recommended not to include those forecasts in a forecast combination.
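To make the scale of the subset comparison concrete: eleven models yield 2^11 - 1 = 2,047 non-empty subsets. The following is a minimal Python sketch (not the thesis's code; the forecast values are illustrative placeholders) of enumerating every subset and applying the equal-weighted mean combination to each:

import numpy as np
from itertools import combinations

# One-step-ahead forecasts from 11 hypothetical individual models
# for a single target period (illustrative values only).
forecasts = np.array([102.3, 98.7, 101.1, 99.5, 100.8,
                      97.9, 103.4, 100.2, 99.0, 101.9, 98.4])

# Enumerate all non-empty subsets and compute the mean combination
# (equal-weighted average) for each one.
mean_combos = {
    subset: forecasts[list(subset)].mean()
    for k in range(1, len(forecasts) + 1)
    for subset in combinations(range(len(forecasts)), k)
}
print(len(mean_combos))  # 2047 = 2**11 - 1 non-empty subsets

The spread of accuracy across these 2,047 combinations is what shrinks as the subset size grows, which is the risk/reward trade-off described above.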

If a large dataset of time series comparable to the one being studied exists, then using this data together with one of the recommended selection strategies may improve the forecast accuracy of a combination method. If this is not feasible, it is recommended to select input forecasts based on past accuracy.
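The generic idea behind selecting input forecasts by past accuracy can be sketched as follows; this is an illustrative example with made-up MASE scores, not one of the six specific rank-based strategies evaluated in the thesis:

import numpy as np

# Hypothetical training-period MASE scores for 11 models
# (lower is better); illustrative values only.
past_mase = np.array([0.92, 1.10, 0.88, 1.35, 0.95,
                      1.02, 0.90, 1.20, 0.99, 1.05, 0.93])
# Their one-step-ahead forecasts for the next period.
forecasts = np.array([102.3, 98.7, 101.1, 99.5, 100.8,
                      97.9, 103.4, 100.2, 99.0, 101.9, 98.4])

def select_top_k(scores, k):
    """Generic rank-based selection: return the indices of the k
    models with the best (lowest) past accuracy scores."""
    return np.argsort(scores)[:k]

chosen = select_top_k(past_mase, k=5)
combined = forecasts[chosen].mean()  # mean combination of the 5 best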
author: Svensson, Magnus LU
course: STAN40 20182
year: 2019
type: H1 - Master's Degree (One Year)
keywords: time series forecasting, combining forecasts, forecast combination, M3-Competition, forecast accuracy, evaluation study, model confidence set
language: English
id: 8974226
date added to LUP: 2019-05-06 11:36:28
date last changed: 2019-05-06 11:36:28
@misc{8974226,
  abstract     = {{This thesis evaluates four of the most popular methods for combining time series forecasts. One aspect that is often overlooked in the literature is the choice of which forecasts to include in a forecast combination. The focus here is to investigate the variability in forecast accuracy across all distinct subsets, drawn from a fixed set of eleven individual forecasting models, that a combination method can be fed. Six rank-based strategies for selecting these subsets are also evaluated.

The methods are evaluated across more than 1,000 monthly time series. The accuracy of one-period-ahead forecasts is analyzed; in total, more than 66 million forecasts are evaluated. The forecasts are assessed with the Mean Absolute Scaled Error metric and via a Model Confidence Set approach, the latter making it possible to generalize the results beyond the evaluation sample.

Selecting the number of forecasts to include in a combination is often a matter of balancing risk and reward. The variance of forecast accuracy across the different subsets of input forecasts is greatest when few forecasts are included and decreases as more forecasts are added. The results suggest that the mean combination method is especially fragile if poorly performing subsets are selected. The three methods that use training data handle this situation much better. If the performance of some of the input forecasts lags behind the rest, it is recommended not to include those forecasts in a forecast combination.

If a large dataset of time series comparable to the one being studied exists, then using this data together with one of the recommended selection strategies may improve the forecast accuracy of a combination method. If this is not feasible, it is recommended to select input forecasts based on past accuracy.}},
  author       = {{Svensson, Magnus}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{Rank-Based Selection Strategies for Forecast Combinations: An Evaluation Study}},
  year         = {{2019}},
}