An animal-actuated rotational head-fixation system for 2-photon imaging during 2-d navigation, bioRxiv, 2018-03-02

Abstract: Understanding how the biology of the brain gives rise to the computations that drive behavior requires high fidelity, large scale, and subcellular measurements of neural activity. 2-photon microscopy is the primary tool that satisfies these requirements, particularly for measurements during behavior. However, this technique requires rigid head-fixation, constraining the behavioral repertoire of experimental subjects. Increasingly, complex task paradigms are being used to investigate the neural substrates of complex behaviors, including navigation of complex environments, resolving uncertainty between multiple outcomes, integrating unreliable information over time, and/or building internal models of the world. In rodents, planning and decision making processes are often expressed via head and body motion. This produces a significant limitation for head-fixed two-photon imaging. We therefore developed a system that overcomes a major problem of head-fixation: the lack of rotational vestibular input. The system measures the rotational strain exerted by mice on the head restraint, which in turn drives a motor that rotates the restraint system and dissipates the strain. This permits mice to rotate their heads in the azimuthal plane with negligible inertia and friction. This stable rotating head-fixation system allows mice to explore physical or virtual 2-D environments. To demonstrate the performance of our system, we conducted 2-photon GCaMP6f imaging in somas and dendrites of pyramidal neurons in mouse retrosplenial cortex. We show that the subcellular resolution of the system’s 2-photon imaging is comparable to that of conventional head-fixed experiments. Additionally, this system allows the attachment of heavy instrumentation to the animal, making it possible to extend the approach to large-scale electrophysiology experiments in the future. Our method enables the use of state-of-the-art imaging techniques while animals perform more complex and naturalistic behaviors than currently possible, with broad potential applications in systems neuroscience.
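The core of the system described above is a closed feedback loop: a strain sensor on the head restraint reports the torque the mouse applies, and a motor rotates the restraint to null that strain. A minimal Python sketch of such a loop is below; the function names (read_strain_gauge, set_motor_velocity) and the gain, deadband, and loop rate are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a strain-nulling control loop, assuming hypothetical
# hardware interfaces; gain, deadband, and loop rate are illustrative values.
import time

GAIN = 5.0        # motor velocity commanded per unit of measured strain (assumed)
DEADBAND = 0.02   # strain magnitude below which the motor holds still (assumed)
LOOP_HZ = 1000    # control-loop update rate in Hz (assumed)

def read_strain_gauge() -> float:
    """Return signed rotational strain the mouse exerts on the head restraint."""
    raise NotImplementedError  # hardware-specific

def set_motor_velocity(velocity: float) -> None:
    """Command the restraint motor; sign convention matches the strain signal."""
    raise NotImplementedError  # hardware-specific

def control_loop() -> None:
    # Proportional velocity control: rotate the restraint in the direction of the
    # applied torque so that the measured strain is continuously dissipated.
    while True:
        strain = read_strain_gauge()
        velocity = GAIN * strain if abs(strain) > DEADBAND else 0.0
        set_motor_velocity(velocity)
        time.sleep(1.0 / LOOP_HZ)
```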

biorxiv neuroscience 200-500-users 2018

fastp: an ultra-fast all-in-one FASTQ preprocessor, bioRxiv, 2018-03-02

Abstract:
Motivation: Quality control and preprocessing of FASTQ files are essential to providing clean data for downstream analysis. Traditionally, a different tool is used for each operation, such as quality control, adapter trimming, and quality filtering. These tools are often insufficiently fast, as most are developed using high-level programming languages (e.g., Python and Java) and provide limited multi-threading support. Reading and loading data multiple times also renders preprocessing slow and I/O inefficient.
Results: We developed fastp as an ultra-fast FASTQ preprocessor with useful quality control and data-filtering features. It can perform quality control, adapter trimming, quality filtering, per-read quality cutting, and many other operations with a single scan of the FASTQ data. It also supports unique molecular identifier preprocessing, poly tail trimming, output splitting, and base correction for paired-end data. It can automatically detect adapters for single-end and paired-end FASTQ data. This tool is developed in C++ and has multi-threading support. Based on our evaluation, fastp is 2–5 times faster than other FASTQ preprocessing tools such as Trimmomatic or Cutadapt, despite performing far more operations than similar tools.
Availability and Implementation: The open-source code and corresponding instructions are available at https://github.com/OpenGene/fastp
Contact: chen@haplox.com
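fastp itself is a multi-threaded C++ tool; the key design point in the abstract is that adapter trimming, quality cutting, and read filtering all happen in a single scan of the FASTQ stream. As a rough illustration of that single-pass idea (not fastp's code), a minimal Python sketch might look like this; the adapter sequence and quality cutoff are arbitrary assumptions.

```python
# Single-pass FASTQ filtering sketch: adapter trimming plus mean-quality
# filtering in one scan. Illustrative only; not fastp's implementation.
import gzip

ADAPTER = "AGATCGGAAGAGC"   # common Illumina adapter prefix (illustrative)
MIN_MEAN_Q = 20             # mean Phred quality cutoff (assumed)

def mean_quality(qual: str) -> float:
    """Mean Phred score of a quality string (Phred+33 encoding)."""
    return sum(ord(c) - 33 for c in qual) / len(qual)

def process(in_path: str, out_path: str) -> None:
    with gzip.open(in_path, "rt") as fin, gzip.open(out_path, "wt") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]
            if not record[0]:
                break  # end of file
            header, seq, plus, qual = (line.rstrip("\n") for line in record)
            # Adapter trimming: cut the read at the first adapter occurrence.
            cut = seq.find(ADAPTER)
            if cut != -1:
                seq, qual = seq[:cut], qual[:cut]
            # Quality filtering: drop reads that are empty or below the cutoff.
            if seq and mean_quality(qual) >= MIN_MEAN_Q:
                fout.write(f"{header}\n{seq}\n{plus}\n{qual}\n")
```

Fusing these steps, and avoiding re-reading the data for each operation, is what lets a single tool replace a chain of separate preprocessing programs.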

biorxiv bioinformatics 100-200-users 2018

Virtual ChIP-seq: predicting transcription factor binding by learning from the transcriptome, bioRxiv, 2018-03-01

Abstract:
Motivation: Identifying transcription factor binding sites is the first step in pinpointing non-coding mutations that disrupt the regulatory function of transcription factors and promote disease. ChIP-seq is the most common method for identifying binding sites, but performing it on patient samples is hampered by the amount of available biological material and the cost of the experiment. Existing methods for computational prediction of regulatory elements primarily predict binding in genomic regions with sequence similarity to known transcription factor sequence preferences. This has limited efficacy since most binding sites do not resemble known transcription factor sequence motifs, and many transcription factors are not even sequence-specific.
Results: We developed Virtual ChIP-seq, which predicts binding of individual transcription factors in new cell types using an artificial neural network that integrates ChIP-seq results from other cell types and chromatin accessibility data in the new cell type. Virtual ChIP-seq also uses learned associations between gene expression and transcription factor binding at specific genomic regions. This approach outperforms methods that predict TF binding solely based on sequence preference, predicting binding for 36 transcription factors (Matthews correlation coefficient > 0.3).
Availability: The datasets we used for training and validation are available at https://virchip.hoffmanlab.org. We have deposited in Zenodo the current version of our software (https://doi.org/10.5281/zenodo.1066928), datasets (https://doi.org/10.5281/zenodo.823297), predictions for 36 transcription factors on Roadmap Epigenomics cell types (https://doi.org/10.5281/zenodo.1455759), and predictions for Cistrome as well as the ENCODE-DREAM in vivo TF Binding Site Prediction Challenge (https://doi.org/10.5281/zenodo.1209308).
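As a rough, hypothetical illustration of the general approach (not the published Virtual ChIP-seq model), the sketch below feeds a few per-region features to a small multilayer perceptron and scores it with the Matthews correlation coefficient, the metric the paper reports. The feature set, network size, and synthetic data are assumptions made for the example.

```python
# Toy sketch: predict TF binding per genomic region from a handful of features
# with an MLP, then evaluate with the Matthews correlation coefficient.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
n_regions = 5000

# Placeholder features per region (all assumed): chromatin accessibility in the
# new cell type, binding frequency across reference cell types' ChIP-seq,
# an expression-based association score, and a sequence-motif score.
X = rng.random((n_regions, 4))
# Placeholder labels standing in for ChIP-seq peak calls in a held-out cell type.
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + 0.1 * rng.standard_normal(n_regions)) > 0.55

train, test = slice(0, 4000), slice(4000, None)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
clf.fit(X[train], y[train])

# The paper's reported threshold of usable performance is MCC > 0.3 per TF.
print("MCC:", matthews_corrcoef(y[test], clf.predict(X[test])))
```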

biorxiv bioinformatics 200-500-users 2018

End-to-end deep image reconstruction from human brain activity, bioRxiv, 2018-02-28

Abstract: Deep neural networks (DNNs) have recently been applied successfully to brain decoding and image reconstruction from functional magnetic resonance imaging (fMRI) activity. However, direct training of a DNN with fMRI data is often avoided because the size of available data is thought to be insufficient to train a complex network with numerous parameters. Instead, a pre-trained DNN has served as a proxy for hierarchical visual representations, and fMRI data were used to decode individual DNN features of a stimulus image using a simple linear model, which were then passed to a reconstruction module. Here, we present our attempt to directly train a DNN model with fMRI data and the corresponding stimulus images to build an end-to-end reconstruction model. We trained a generative adversarial network with an additional loss term defined in a high-level feature space (feature loss) using up to 6,000 training data points (natural images and the corresponding fMRI responses). The trained deep generator network was tested on an independent dataset, directly producing a reconstructed image given an fMRI pattern as input. The reconstructions obtained from the proposed method resembled both natural and artificial test stimuli. The accuracy increased as a function of training data size, though it did not outperform the decoded feature-based method at the available data size. Ablation analyses indicated that the feature loss played a critical role in achieving accurate reconstruction. Our results suggest the potential of the end-to-end framework to learn a direct mapping between brain activity and perception given even larger datasets.
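The generator objective described above combines an adversarial term with a "feature loss" computed in a high-level feature space of a fixed network. A hypothetical PyTorch sketch of such a combined loss is below; the choice of VGG19, the layer cut-off, and the loss weighting are assumptions, not the authors' exact architecture.

```python
# Sketch of a GAN generator loss with an added feature-space term.
# Assumptions: VGG19 features as the fixed feature extractor, MSE in feature
# space, and an arbitrary adversarial weight.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Fixed, pre-trained feature extractor (downloads ImageNet weights on first use).
feature_net = vgg19(weights="DEFAULT").features[:16].eval()
for p in feature_net.parameters():
    p.requires_grad_(False)

def generator_loss(fake_img, real_img, disc_logits_fake, adv_weight=0.01):
    # Feature loss: distance between generated and target images in feature space.
    feat_loss = F.mse_loss(feature_net(fake_img), feature_net(real_img))
    # Adversarial loss: push the discriminator to label generated images as real.
    adv_loss = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    return feat_loss + adv_weight * adv_loss
```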

biorxiv neuroscience 200-500-users 2018

 
