Lund University Publications

A crowdsourced set of curated structural variants for the human genome

Chapman, Lesley M; Spies, Noah; Pai, Patrick; Lim, Chun Shen; Carroll, Andrew; Narzisi, Giuseppe; Watson, Christopher M; Proukakis, Christos; Clarke, Wayne E; Nariai, Naoki; et al. (2020) In PLoS Computational Biology 16(6).
Abstract
A high-quality benchmark for small variants encompassing 88 to 90% of the reference genome has been developed for seven Genome in a Bottle (GIAB) reference samples. However, a reliable benchmark for large indels and structural variants (SVs) is more challenging. In this study, we manually curated 1235 SVs, which can ultimately be used to evaluate SV callers or train machine learning models. We developed a crowdsourcing app, SVCurator, to help GIAB curators manually review large indels and SVs within the human genome, and report their genotype and size accuracy. SVCurator displays images from short-, long-, and linked-read sequencing data from the GIAB Ashkenazi Jewish Trio son [NIST RM 8391/HG002]. We asked curators to assign labels describing SV type (deletion or insertion), size accuracy, and genotype for 1235 putative insertions and deletions sampled from different size bins between 20 and 892,149 bp. 'Expert' curators were 93% concordant with each other, and 37 of the 61 curators had at least 78% concordance with a set of 'expert' curators. The curators were least concordant for complex SVs and SVs that had inaccurate breakpoints or size predictions. After filtering events with low concordance among curators, we produced high-confidence labels for 935 events. The SVCurator crowdsourced labels were 94.5% concordant with the heuristic-based draft benchmark SV callset from GIAB. We found that curators can successfully evaluate putative SVs when given evidence from multiple sequencing technologies.
To cite or link to this publication, use the following BibTeX entry:
@article{cdd6cd98-734f-4c7e-a60f-4fb91fe81fca,
  abstract     = {{A high-quality benchmark for small variants encompassing 88 to 90% of the reference genome has been developed for seven Genome in a Bottle (GIAB) reference samples. However, a reliable benchmark for large indels and structural variants (SVs) is more challenging. In this study, we manually curated 1235 SVs, which can ultimately be used to evaluate SV callers or train machine learning models. We developed a crowdsourcing app, SVCurator, to help GIAB curators manually review large indels and SVs within the human genome, and report their genotype and size accuracy. SVCurator displays images from short-, long-, and linked-read sequencing data from the GIAB Ashkenazi Jewish Trio son [NIST RM 8391/HG002]. We asked curators to assign labels describing SV type (deletion or insertion), size accuracy, and genotype for 1235 putative insertions and deletions sampled from different size bins between 20 and 892,149 bp. 'Expert' curators were 93% concordant with each other, and 37 of the 61 curators had at least 78% concordance with a set of 'expert' curators. The curators were least concordant for complex SVs and SVs that had inaccurate breakpoints or size predictions. After filtering events with low concordance among curators, we produced high-confidence labels for 935 events. The SVCurator crowdsourced labels were 94.5% concordant with the heuristic-based draft benchmark SV callset from GIAB. We found that curators can successfully evaluate putative SVs when given evidence from multiple sequencing technologies.}},
  author       = {{Chapman, Lesley M and Spies, Noah and Pai, Patrick and Lim, Chun Shen and Carroll, Andrew and Narzisi, Giuseppe and Watson, Christopher M and Proukakis, Christos and Clarke, Wayne E and Nariai, Naoki and Dawson, Eric and Jones, Garan and Blankenberg, Daniel and Brueffer, Christian and Xiao, Chunlin and Kolora, Sree Rohit Raj and Alexander, Noah and Wolujewicz, Paul and Ahmed, Azza E. and Smith, Graeme and Shehreen, Saadlee and Wenger, Aaron M and Salit, Marc and Zook, Justin M}},
  issn         = {{1553-7358}},
  keywords     = {{Bioinformatics; Computational Biology; Structural variants; Benchmark; crowd sourcing}},
  language     = {{eng}},
  month        = {{06}},
  number       = {{6}},
  publisher    = {{Public Library of Science (PLoS)}},
  journal      = {{PLoS Computational Biology}},
  title        = {{A crowdsourced set of curated structural variants for the human genome}},
  url          = {{http://dx.doi.org/10.1371/journal.pcbi.1007933}},
  doi          = {{10.1371/journal.pcbi.1007933}},
  volume       = {{16}},
  year         = {{2020}},
}