Lund University Publications

On the Asymptotic Properties of SLOPE

Kos, Michał and Bogdan, Małgorzata (2020). In Sankhya A 82(2), pp. 499–532
Abstract

The Sorted L-One Penalized Estimator (SLOPE) is a relatively new convex optimization procedure for selecting predictors in high dimensional regression analyses. SLOPE extends LASSO by replacing the L1 penalty norm with a sorted L1 norm based on a non-increasing sequence of tuning parameters. This allows SLOPE to adapt to unknown sparsity and to achieve an asymptotic minimax convergence rate under a wide range of high dimensional generalized linear models. Additionally, when the design matrix is orthogonal, SLOPE with the sequence of tuning parameters λ^BH, corresponding to the sequence of decaying thresholds of the Benjamini-Hochberg multiple testing correction, provably controls the False Discovery Rate (FDR) in the multiple regression model. In this article we provide new asymptotic results on the properties of SLOPE when the elements of the design matrix are i.i.d. random variables from the Gaussian distribution. Specifically, we provide conditions under which the asymptotic FDR of SLOPE based on the sequence λ^BH converges to zero and the power converges to one. We illustrate these theoretical results with an extensive simulation study. We also provide precise formulas for the FDR of SLOPE under different loss functions, which set the stage for future investigation of the model selection properties of SLOPE and its extensions.
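
The two ingredients named in the abstract, the sorted L1 norm and the λ^BH tuning sequence, can be stated concretely. Below is a minimal Python sketch, assuming numpy and scipy; the helper names lambda_bh and sorted_l1_norm are illustrative, not taken from the paper. It computes λ^BH_i = Φ^{-1}(1 - i·q/(2p)) and the SLOPE penalty J_λ(b) = Σ_i λ_i |b|_(i), where |b|_(1) ≥ … ≥ |b|_(p) are the sorted absolute coefficients.

import numpy as np
from scipy.stats import norm

def lambda_bh(p, q=0.1):
    # lambda^BH_i = Phi^{-1}(1 - i*q/(2p)): a non-increasing sequence of
    # Gaussian quantiles mirroring the Benjamini-Hochberg thresholds.
    i = np.arange(1, p + 1)
    return norm.ppf(1.0 - i * q / (2.0 * p))

def sorted_l1_norm(beta, lam):
    # J_lambda(beta) = sum_i lambda_i * |beta|_(i): the largest
    # |coefficient| is paired with the largest lambda, which is how
    # SLOPE adapts the penalty to unknown sparsity.
    abs_desc = np.sort(np.abs(beta))[::-1]  # |beta|_(1) >= ... >= |beta|_(p)
    return float(np.dot(lam, abs_desc))

p = 5
lam = lambda_bh(p, q=0.1)
beta = np.array([3.0, 0.0, -1.5, 0.2, 0.0])
print(lam)                        # decaying tuning sequence lambda^BH
print(sorted_l1_norm(beta, lam))  # value of the SLOPE penalty at beta

Note that with a constant sequence λ_1 = … = λ_p the penalty reduces to the ordinary LASSO L1 penalty, which is the sense in which SLOPE extends LASSO.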

author
Kos, Michał and Bogdan, Małgorzata
organization
publishing date
2020
type
Contribution to journal
publication status
published
subject
keywords
Convex optimization; High dimensional regression; Model selection; Multiple testing; Primary 62J05, 62J07; Secondary 62F12
in
Sankhya A
volume
82
issue
2
pages
34 pages
publisher
Springer
external identifiers
  • scopus:85089476280
ISSN
0976-836X
DOI
10.1007/s13171-020-00212-5
language
English
LU publication?
yes
id
e8e6951d-63dd-424f-a37c-613135ca0346
date added to LUP
2020-08-27 13:43:30
date last changed
2022-04-19 00:23:39
@article{e8e6951d-63dd-424f-a37c-613135ca0346,
  abstract     = {{The Sorted L-One Penalized Estimator (SLOPE) is a relatively new convex optimization procedure for selecting predictors in high dimensional regression analyses. SLOPE extends LASSO by replacing the L$_1$ penalty norm with a sorted L$_1$ norm based on a non-increasing sequence of tuning parameters. This allows SLOPE to adapt to unknown sparsity and to achieve an asymptotic minimax convergence rate under a wide range of high dimensional generalized linear models. Additionally, when the design matrix is orthogonal, SLOPE with the sequence of tuning parameters $\lambda^{BH}$, corresponding to the sequence of decaying thresholds of the Benjamini-Hochberg multiple testing correction, provably controls the False Discovery Rate (FDR) in the multiple regression model. In this article we provide new asymptotic results on the properties of SLOPE when the elements of the design matrix are i.i.d. random variables from the Gaussian distribution. Specifically, we provide conditions under which the asymptotic FDR of SLOPE based on the sequence $\lambda^{BH}$ converges to zero and the power converges to one. We illustrate these theoretical results with an extensive simulation study. We also provide precise formulas for the FDR of SLOPE under different loss functions, which set the stage for future investigation of the model selection properties of SLOPE and its extensions.}},
  author       = {{Kos, Michał and Bogdan, Małgorzata}},
  issn         = {{0976-836X}},
  keywords     = {{Convex optimization; High dimensional regression; Model selection; Multiple testing; Primary 62J05, 62J07; Secondary 62F12}},
  language     = {{eng}},
  number       = {{2}},
  pages        = {{499--532}},
  publisher    = {{Springer}},
  series       = {{Sankhya A}},
  title        = {{On the Asymptotic Properties of SLOPE}},
  url          = {{http://dx.doi.org/10.1007/s13171-020-00212-5}},
  doi          = {{10.1007/s13171-020-00212-5}},
  volume       = {{82}},
  year         = {{2020}},
}