Virtual ChIP-seq: predicting transcription factor binding by learning from the transcriptome, bioRxiv, 2018-03-01

Abstract

Motivation: Identifying transcription factor binding sites is the first step in pinpointing non-coding mutations that disrupt the regulatory function of transcription factors and promote disease. ChIP-seq is the most common method for identifying binding sites, but performing it on patient samples is hampered by the amount of available biological material and the cost of the experiment. Existing methods for computational prediction of regulatory elements primarily predict binding in genomic regions with sequence similarity to known transcription factor sequence preferences. This approach has limited efficacy because most binding sites do not resemble known transcription factor sequence motifs, and many transcription factors are not even sequence-specific.

Results: We developed Virtual ChIP-seq, which predicts binding of individual transcription factors in new cell types using an artificial neural network that integrates ChIP-seq results from other cell types and chromatin accessibility data in the new cell type. Virtual ChIP-seq also uses learned associations between gene expression and transcription factor binding at specific genomic regions. This approach outperforms methods that predict TF binding solely based on sequence preference, predicting binding for 36 transcription factors (Matthews correlation coefficient > 0.3).

Availability: The datasets we used for training and validation are available at https://virchip.hoffmanlab.org. We have deposited in Zenodo the current version of our software (http://doi.org/10.5281/zenodo.1066928), datasets (http://doi.org/10.5281/zenodo.823297), predictions for 36 transcription factors on Roadmap Epigenomics cell types (http://doi.org/10.5281/zenodo.1455759), and predictions in Cistrome, as well as the ENCODE-DREAM in vivo TF Binding Site Prediction Challenge (http://doi.org/10.5281/zenodo.1209308).
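The abstract reports performance as a Matthews correlation coefficient above 0.3. As a point of reference, the MCC for a binary predictor can be computed from the confusion-matrix counts; the sketch below uses the standard formula with purely illustrative counts, not values from the paper.

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient for a binary confusion matrix."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally defined as 0 when any marginal sum is zero
    return num / den if den else 0.0

# Illustrative counts for a genome-wide binding predictor
print(round(mcc(tp=80, fp=20, tn=880, fn=20), 3))  # → 0.778
```

Unlike accuracy, MCC stays informative when bound regions are rare relative to unbound regions, which is why it is a common choice for genome-wide binding prediction.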

biorxiv bioinformatics 200-500-users 2018

End-to-end deep image reconstruction from human brain activity, bioRxiv, 2018-02-28

Abstract

Deep neural networks (DNNs) have recently been applied successfully to brain decoding and image reconstruction from functional magnetic resonance imaging (fMRI) activity. However, direct training of a DNN with fMRI data is often avoided because the size of available data is thought to be insufficient to train a complex network with numerous parameters. Instead, a pre-trained DNN has served as a proxy for hierarchical visual representations, and fMRI data were used to decode individual DNN features of a stimulus image using a simple linear model, which were then passed to a reconstruction module. Here, we present our attempt to directly train a DNN model with fMRI data and the corresponding stimulus images to build an end-to-end reconstruction model. We trained a generative adversarial network with an additional loss term defined in a high-level feature space (feature loss) using up to 6,000 training data points (natural images and the fMRI responses). The trained deep generator network was tested on an independent dataset, directly producing a reconstructed image given an fMRI pattern as the input. The reconstructions obtained from the proposed method resembled both natural and artificial test stimuli. The accuracy increased as a function of the training data size, though it did not outperform the decoded feature-based method with the available data size. Ablation analyses indicated that the feature loss played a critical role in achieving accurate reconstruction. Our results suggest a potential for the end-to-end framework to learn a direct mapping between brain activity and perception given even larger datasets.
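The "feature loss" the abstract describes augments the usual adversarial objective with a reconstruction error measured in a high-level feature space rather than pixel space. The sketch below shows the shape of such a combined objective; the function names, the mean-squared-error choice, and the weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def feature_loss(feat_recon, feat_target):
    # Mean squared error between high-level feature vectors,
    # e.g. activations of a fixed comparator network
    return np.mean((feat_recon - feat_target) ** 2)

def generator_loss(adv_loss, feat_recon, feat_target, lam=1.0):
    # Combined generator objective: adversarial term plus the
    # feature-space reconstruction term, weighted by lam
    return adv_loss + lam * feature_loss(feat_recon, feat_target)

rng = np.random.default_rng(0)
f_true = rng.normal(size=128)                 # stand-in target features
f_pred = f_true + 0.1 * rng.normal(size=128)  # slightly perturbed reconstruction
print(generator_loss(adv_loss=0.5, feat_recon=f_pred, feat_target=f_true))
```

Penalizing error in feature space rather than pixel space rewards reconstructions that are perceptually similar to the stimulus even when they differ pixel by pixel, which is consistent with the ablation result that this term was critical for accuracy.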

biorxiv neuroscience 200-500-users 2018

Best Practices for Benchmarking Germline Small Variant Calls in Human Genomes, bioRxiv, 2018-02-24

Abstract

Assessing the accuracy of NGS variant calling is immensely facilitated by a robust benchmarking strategy and tools to carry it out in a standard way. Benchmarking variant calls requires careful attention to definitions of performance metrics, sophisticated comparison approaches, and stratification by variant type and genome context. The Global Alliance for Genomics and Health (GA4GH) Benchmarking Team has developed standardized performance metrics and tools for benchmarking germline small variant calls. This team includes representatives from sequencing technology developers, government agencies, academic bioinformatics researchers, clinical laboratories, and commercial technology and bioinformatics developers for whom benchmarking variant calls is essential to their work. Benchmarking variant calls is a challenging problem for many reasons:

- Evaluating variant calls requires complex matching algorithms and standardized counting because the same variant may be represented differently in truth and query callsets.
- Defining and interpreting resulting metrics such as precision (aka positive predictive value = TP/(TP+FP)) and recall (aka sensitivity = TP/(TP+FN)) requires standardization to draw robust conclusions about comparative performance for different variant calling methods.
- Performance of NGS methods can vary depending on variant types and genome context; as a result, understanding performance requires meaningful stratification.
- High-confidence variant calls and regions that can be used as "truth" to accurately identify false positives and negatives are difficult to define, and reliable calls for the most challenging regions and variants remain out of reach.

We have made significant progress on standardizing comparison methods, metric definitions and reporting, as well as developing and using truth sets. Our methods are publicly available on GitHub (https://github.com/ga4gh/benchmarking-tools) and in a web-based app on precisionFDA, which allow users to compare their variant calls against truth sets and to obtain a standardized report on their variant calling performance. Our methods have been piloted in the precisionFDA variant calling challenges to identify the best-in-class variant calling methods within high-confidence regions. Finally, we recommend a set of best practices for using our tools and critically evaluating the results.
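The precision and recall definitions given in the abstract can be computed directly from the matched counts of true positives, false positives, and false negatives. A minimal sketch, with illustrative counts rather than values from any real benchmark:

```python
def precision(tp, fp):
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Sensitivity: TP / (TP + FN)."""
    return tp / (tp + fn)

# Illustrative callset: 950 matched calls, 50 false positives,
# 30 truth variants missed by the query callset
print(round(precision(950, 50), 2))  # → 0.95
print(round(recall(950, 30), 4))     # → 0.9694
```

As the abstract notes, the hard part is not this arithmetic but producing standardized TP/FP/FN counts in the first place, since the same variant can be represented differently in truth and query callsets.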

biorxiv genomics 100-200-users 2018


Created with the audiences framework by Jedidiah Carlson

Powered by Hugo