Structure learning of Gaussian Markov random fields with false discovery rate control
(2019) In Symmetry 11(10).
- abstract
In this paper, we propose a new estimation procedure for discovering the structure of Gaussian Markov random fields (MRFs) with false discovery rate (FDR) control, making use of the sorted ℓ1-norm (SL1) regularization. A Gaussian MRF is an undirected graph representing a multivariate Gaussian distribution, where nodes are random variables and edges represent the conditional dependence between the connected nodes. Since it is possible to learn the edge structure of Gaussian MRFs directly from data, Gaussian MRFs provide an excellent way to understand complex data by revealing the dependence structure among many input features, such as genes, sensors, users, or documents. In learning the graphical structure of Gaussian MRFs, the goal is to discover the actual edges of the underlying but unknown probabilistic graphical model; this becomes more difficult as the number of random variables (features) p grows relative to the number of data points n. In particular, when p ≥ n, it is statistically unavoidable for any estimation procedure to include false edges. Therefore, there have been many attempts to reduce the false detection of edges, in particular using different types of regularization on the learning parameters. Our method makes use of SL1 regularization, introduced recently for model selection in linear regression. We focus on a key benefit of SL1 regularization: it can be used to control the FDR of detecting important random variables. Adapting SL1 to probabilistic graphical models, we show that SL1 can be used for the structure learning of Gaussian MRFs via our proposed procedure nsSLOPE (neighborhood selection Sorted L-One Penalized Estimation), which controls the FDR of edge detection.
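As a minimal illustration of the sorted ℓ1-norm (SL1) penalty underlying SLOPE-type methods (a sketch, not the paper's nsSLOPE implementation; the function name and weights below are illustrative — in practice the weight sequence is chosen from Benjamini–Hochberg-style quantiles to achieve FDR control):

```python
def sorted_l1_norm(beta, lam):
    """SL1 penalty: sum_i lam[i] * |beta|_(i), where |beta|_(1) >= |beta|_(2) >= ...
    are the coefficient magnitudes sorted in non-increasing order, and lam is a
    non-increasing, non-negative weight sequence (lam[0] >= lam[1] >= ... >= 0)."""
    mags = sorted((abs(b) for b in beta), reverse=True)
    return sum(l * m for l, m in zip(lam, mags))

# Example: magnitudes sorted descending are [3, 2, 1], so the
# penalty is 0.3*3 + 0.2*2 + 0.1*1 = 1.4
value = sorted_l1_norm([3.0, -1.0, 2.0], [0.3, 0.2, 0.1])
```

Because larger coefficients are matched with larger weights, SL1 interpolates between the ℓ1 norm (all weights equal) and the ℓ∞ norm (only the first weight nonzero), which is what enables the FDR-controlling weight choices discussed in the abstract.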
- author
- Lee, Sangkyun; Sobczyk, Piotr and Bogdan, Malgorzata
- publishing date
- 2019-10-01
- type
- Contribution to journal
- publication status
- published
- keywords
- FDR control, Gaussian Markov random field, Inverse Covariance Matrix Estimation
- in
- Symmetry
- volume
- 11
- issue
- 10
- article number
- 1311
- publisher
- MDPI AG
- external identifiers
- scopus:85074264260
- ISSN
- 2073-8994
- DOI
- 10.3390/sym11101311
- language
- English
- LU publication?
- no
- additional info
- Publisher Copyright: © 2019 by the authors.
- id
- fbb07318-d2bc-4b5b-b524-93c9834fda48
- date added to LUP
- 2023-12-08 09:20:47
- date last changed
- 2023-12-08 16:39:35
@article{fbb07318-d2bc-4b5b-b524-93c9834fda48,
  abstract  = {{<p>In this paper, we propose a new estimation procedure for discovering the structure of Gaussian Markov random fields (MRFs) with false discovery rate (FDR) control, making use of the sorted ℓ<sub>1</sub>-norm (SL1) regularization. A Gaussian MRF is an undirected graph representing a multivariate Gaussian distribution, where nodes are random variables and edges represent the conditional dependence between the connected nodes. Since it is possible to learn the edge structure of Gaussian MRFs directly from data, Gaussian MRFs provide an excellent way to understand complex data by revealing the dependence structure among many input features, such as genes, sensors, users, or documents. In learning the graphical structure of Gaussian MRFs, the goal is to discover the actual edges of the underlying but unknown probabilistic graphical model; this becomes more difficult as the number of random variables (features) p grows relative to the number of data points n. In particular, when p ≥ n, it is statistically unavoidable for any estimation procedure to include false edges. Therefore, there have been many attempts to reduce the false detection of edges, in particular using different types of regularization on the learning parameters. Our method makes use of SL1 regularization, introduced recently for model selection in linear regression. We focus on a key benefit of SL1 regularization: it can be used to control the FDR of detecting important random variables. Adapting SL1 to probabilistic graphical models, we show that SL1 can be used for the structure learning of Gaussian MRFs via our proposed procedure nsSLOPE (neighborhood selection Sorted L-One Penalized Estimation), which controls the FDR of edge detection.</p>}},
  author    = {{Lee, Sangkyun and Sobczyk, Piotr and Bogdan, Malgorzata}},
  issn      = {{2073-8994}},
  journal   = {{Symmetry}},
  keywords  = {{FDR control; Gaussian Markov random field; Inverse Covariance Matrix Estimation}},
  language  = {{eng}},
  month     = {{10}},
  number    = {{10}},
  publisher = {{MDPI AG}},
  title     = {{Structure learning of Gaussian Markov random fields with false discovery rate control}},
  url       = {{http://dx.doi.org/10.3390/sym11101311}},
  doi       = {{10.3390/sym11101311}},
  volume    = {{11}},
  year      = {{2019}},
}