Lund University Publications

Asymptotic Bayes-optimality under sparsity of some multiple testing procedures

Bogdan, Malgorzata LU ; Chakrabarti, Arijit ; Frommlet, Florian and Ghosh, Jayanta K. (2011) In Annals of Statistics 39(3). p.1551-1579
Abstract

Within a Bayesian decision theoretic framework we investigate some asymptotic optimality properties of a large class of multiple testing rules. A parametric setup is considered, in which observations come from a normal scale mixture model and the total loss is assumed to be the sum of losses for individual tests. Our model can be used for testing point null hypotheses, as well as to distinguish large signals from a multitude of very small effects. A rule is defined to be asymptotically Bayes optimal under sparsity (ABOS), if within our chosen asymptotic framework the ratio of its Bayes risk and that of the Bayes oracle (a rule which minimizes the Bayes risk) converges to one. Our main interest is in the asymptotic scheme where the proportion p of "true" alternatives converges to zero. We fully characterize the class of fixed threshold multiple testing rules which are ABOS, and hence derive conditions for the asymptotic optimality of rules controlling the Bayesian False Discovery Rate (BFDR). We finally provide conditions under which the popular Benjamini-Hochberg (BH) and Bonferroni procedures are ABOS and show that for a wide class of sparsity levels, the threshold of the former can be approximated by a nonrandom threshold. It turns out that while the choice of asymptotically optimal FDR levels for BH depends on the relative cost of a type I error, it is almost independent of the level of sparsity. Specifically, we show that when the number of tests m increases to infinity, then BH with FDR level chosen in accordance with the assumed loss function is ABOS in the entire range of sparsity parameters p ∝ m^{-β}, with β ∈ (0, 1].
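
The Benjamini-Hochberg procedure referenced in the abstract is a simple step-up rule on the sorted p-values. The following Python sketch is not taken from the paper; the FDR level alpha, the simulated sparse normal-means data, and all variable names are illustrative assumptions. It shows the standard BH rule applied to m tests with sparsity proportion p ∝ m^{-β}:

import numpy as np
from scipy.stats import norm

def benjamini_hochberg(pvals, alpha=0.05):
    # Standard BH step-up rule: reject the k smallest p-values, where k is the
    # largest index with p_(k) <= k * alpha / m (and k = 0 if no index qualifies).
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    thresholds = alpha * np.arange(1, m + 1) / m
    passing = np.nonzero(sorted_p <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if passing.size > 0:
        k = passing[-1]                 # largest index passing the step-up comparison
        reject[order[:k + 1]] = True
    return reject

# Illustrative sparse mixture: a proportion p ~ m^(-beta) of tests carry a signal.
rng = np.random.default_rng(0)
m, beta = 10_000, 0.6
p_sparsity = m ** (-beta)
is_signal = rng.random(m) < p_sparsity
z = rng.normal(np.where(is_signal, 4.0, 0.0), 1.0)   # large signals vs. point nulls
pvals = 2 * norm.sf(np.abs(z))                       # two-sided p-values
rejected = benjamini_hochberg(pvals, alpha=0.05)
print(rejected.sum(), "rejections;", is_signal.sum(), "true signals")

As the abstract notes, the asymptotically optimal choice of the FDR level for BH is driven by the relative cost of a type I error and is almost independent of the sparsity level β.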

Please use this url to cite or link to this publication:
author
Bogdan, Malgorzata ; Chakrabarti, Arijit ; Frommlet, Florian and Ghosh, Jayanta K.
publishing date
2011
type
Contribution to journal
publication status
published
subject
keywords
Asymptotic optimality, Bayes oracle, FDR, Multiple testing
in
Annals of Statistics
volume
39
issue
3
pages
29 pages
publisher
Institute of Mathematical Statistics
external identifiers
  • scopus:84857648827
ISSN
0090-5364
DOI
10.1214/10-AOS869
language
English
LU publication?
no
id
649f2e1f-082f-47f6-905d-8eb8317981dc
date added to LUP
2023-12-08 09:29:30
date last changed
2023-12-11 13:09:52
@article{649f2e1f-082f-47f6-905d-8eb8317981dc,
  abstract     = {{Within a Bayesian decision theoretic framework we investigate some asymptotic optimality properties of a large class of multiple testing rules. A parametric setup is considered, in which observations come from a normal scale mixture model and the total loss is assumed to be the sum of losses for individual tests. Our model can be used for testing point null hypotheses, as well as to distinguish large signals from a multitude of very small effects. A rule is defined to be asymptotically Bayes optimal under sparsity (ABOS), if within our chosen asymptotic framework the ratio of its Bayes risk and that of the Bayes oracle (a rule which minimizes the Bayes risk) converges to one. Our main interest is in the asymptotic scheme where the proportion p of "true" alternatives converges to zero. We fully characterize the class of fixed threshold multiple testing rules which are ABOS, and hence derive conditions for the asymptotic optimality of rules controlling the Bayesian False Discovery Rate (BFDR). We finally provide conditions under which the popular Benjamini-Hochberg (BH) and Bonferroni procedures are ABOS and show that for a wide class of sparsity levels, the threshold of the former can be approximated by a nonrandom threshold. It turns out that while the choice of asymptotically optimal FDR levels for BH depends on the relative cost of a type I error, it is almost independent of the level of sparsity. Specifically, we show that when the number of tests m increases to infinity, then BH with FDR level chosen in accordance with the assumed loss function is ABOS in the entire range of sparsity parameters p ∝ m^{-β}, with β ∈ (0, 1].}},
  author       = {{Bogdan, Malgorzata and Chakrabarti, Arijit and Frommlet, Florian and Ghosh, Jayanta K.}},
  issn         = {{0090-5364}},
  keywords     = {{Asymptotic optimality; Bayes oracle; FDR; Multiple testing}},
  language     = {{eng}},
  number       = {{3}},
  pages        = {{1551--1579}},
  publisher    = {{Institute of Mathematical Statistics}},
  series       = {{Annals of Statistics}},
  title        = {{Asymptotic Bayes-optimality under sparsity of some multiple testing procedures}},
  url          = {{http://dx.doi.org/10.1214/10-AOS869}},
  doi          = {{10.1214/10-AOS869}},
  volume       = {{39}},
  year         = {{2011}},
}