
Lund University Publications


The Strong Screening Rule For SLOPE

Larsson, Johan; Bogdan, Malgorzata and Wallin, Jonas (2020) Neural Information Processing Systems. In Advances in Neural Information Processing Systems, pp. 1-12
Abstract
Extracting relevant features from data sets where the number of observations n is much smaller than the number of predictors p is a major challenge in modern statistics. Sorted L-One Penalized Estimation (SLOPE), a generalization of the lasso, is a promising method within this setting. Current numerical procedures for SLOPE, however, lack the efficiency that respective tools for the lasso enjoy, particularly in the context of estimating a complete regularization path. A key component in the efficiency of the lasso is predictor screening rules: rules that allow predictors to be discarded before estimating the model. This is the first paper to establish such a rule for SLOPE. We develop a screening rule for SLOPE by examining its subdifferential and show that this rule is a generalization of the strong rule for the lasso. Our rule is heuristic, which means that it may discard predictors erroneously. In our paper, however, we show that such situations are rare and easily safeguarded against by a simple check of the optimality conditions. Our numerical experiments show that the rule performs well in practice, leading to improvements by orders of magnitude for data in the p >> n domain, as well as incurring no additional computational overhead when n > p.
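
For context, the lasso strong rule that the abstract says the SLOPE rule generalizes can be sketched briefly: at the next penalty value lam_next, a predictor j is discarded if |x_j' r| < 2*lam_next - lam_prev, where r is the residual from the fit at lam_prev (Tibshirani et al., 2012). The Python sketch below is purely illustrative of that classic lasso rule under this assumption; it is not the paper's SLOPE screening rule or its implementation, and the function and variable names are hypothetical.

    import numpy as np

    def lasso_strong_rule(X, y, beta_prev, lam_prev, lam_next):
        # Residual from the fit at the previous penalty value lam_prev.
        r = y - X @ beta_prev
        # Absolute correlation of each predictor with that residual.
        c = np.abs(X.T @ r)
        # Strong-rule check: keep predictor j only if |x_j' r| >= 2*lam_next - lam_prev.
        return c >= 2 * lam_next - lam_prev

Because such screening is heuristic, a fit on the retained predictors would, as the abstract notes, be followed by a check of the optimality (KKT) conditions for the discarded predictors, refitting if any are violated.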
author
Larsson, Johan; Bogdan, Malgorzata and Wallin, Jonas
organization
publishing date
2020
type
Contribution to journal
publication status
published
subject
keywords
screening rules, lasso, regression, regularization
in
Advances in Neural Information Processing Systems
pages
12 pages
publisher
Morgan Kaufmann Publishers
conference name
Neural Information Processing Systems
conference dates
2020-12-06 - 2020-12-12
external identifiers
  • scopus:85108108776
ISSN
1049-5258
project
Optimization and Algorithms in Sparse Regression: Screening Rules, Coordinate Descent, and Normalization
language
English
LU publication?
yes
id
11f67d79-fc71-4448-9d5e-69e4edfef896
alternative location
https://papers.nips.cc/paper/2020/file/a7d8ae4569120b5bec12e7b6e9648b86-Paper.pdf
date added to LUP
2021-05-03 11:34:59
date last changed
2024-05-18 09:10:45
@article{11f67d79-fc71-4448-9d5e-69e4edfef896,
  abstract     = {{Extracting relevant features from data sets where the number of observations n is much smaller than the number of predictors p is a major challenge in modern statistics. Sorted L-One Penalized Estimation (SLOPE), a generalization of the lasso, is a promising method within this setting. Current numerical procedures for SLOPE, however, lack the efficiency that respective tools for the lasso enjoy, particularly in the context of estimating a complete regularization path. A key component in the efficiency of the lasso is predictor screening rules: rules that allow predictors to be discarded before estimating the model. This is the first paper to establish such a rule for SLOPE. We develop a screening rule for SLOPE by examining its subdifferential and show that this rule is a generalization of the strong rule for the lasso. Our rule is heuristic, which means that it may discard predictors erroneously. In our paper, however, we show that such situations are rare and easily safeguarded against by a simple check of the optimality conditions. Our numerical experiments show that the rule performs well in practice, leading to improvements by orders of magnitude for data in the p >> n domain, as well as incurring no additional computational overhead when n > p.}},
  author       = {{Larsson, Johan and Bogdan, Malgorzata and Wallin, Jonas}},
  issn         = {{1049-5258}},
  keywords     = {{screening rules; lasso; regression; regularization}},
  language     = {{eng}},
  pages        = {{1--12}},
  publisher    = {{Morgan Kaufmann Publishers}},
  series       = {{Advances in Neural Information Processing Systems}},
  title        = {{The Strong Screening Rule For SLOPE}},
  url          = {{https://papers.nips.cc/paper/2020/file/a7d8ae4569120b5bec12e7b6e9648b86-Paper.pdf}},
  year         = {{2020}},
}