Could a Neuroscientist Understand a Microprocessor?, bioRxiv, 2016-05-27

Abstract: There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether the algorithmically generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and understood at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that these approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests that current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue that scientists should use complex non-linear dynamical systems with known ground truth, such as the microprocessor, as a validation platform for time-series and structure discovery methods.

Author Summary: Neuroscience is held back by the fact that it is hard to evaluate whether a conclusion is correct; the complexity of the systems under study and their experimental inaccessibility make the assessment of algorithmic and data analytic techniques challenging at best. We thus argue for testing approaches using known artifacts, where the correct interpretation is known. Here we present a microprocessor platform as one such test case. We find that many approaches in neuroscience, when used naïvely, fall short of producing a meaningful understanding.
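As a hedged illustration of the kind of population-level analysis the abstract alludes to, the sketch below applies dimensionality reduction (PCA) to toy simulated "transistor recordings". The choice of PCA, scikit-learn, and random binary traces is an assumption made purely for illustration; it is not the paper's dataset or analysis pipeline.

```python
# Minimal sketch: applying a standard neuroscience population analysis
# (dimensionality reduction) to simulated transistor activity.
# The data here are random toy traces, not the paper's actual recordings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_transistors, n_timepoints = 200, 1000

# Toy binary on/off traces standing in for recorded transistor states.
traces = rng.integers(0, 2, size=(n_transistors, n_timepoints)).astype(float)

# Project the population activity onto its leading principal components,
# as is commonly done with neural population recordings.
pca = PCA(n_components=10)
components = pca.fit_transform(traces)  # shape: (n_transistors, 10)

print("Variance explained by first 10 PCs:",
      pca.explained_variance_ratio_.sum())
```

With ground truth available, one can then ask whether such low-dimensional structure actually corresponds to the processor's known functional organization, which is the kind of validation the paper argues for.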

biorxiv neuroscience 500+-users 2016

Best Practices in Data Analysis and Sharing in Neuroimaging using MRI, bioRxiv, 2016-05-23

Abstract: Neuroimaging enables rich noninvasive measurements of human brain activity, but translating such data into neuroscientific insights and clinical applications requires complex analyses and collaboration among a diverse array of researchers. The open science movement is reshaping scientific culture and addressing the challenges of transparency and reproducibility of research. To advance open science in neuroimaging, the Organization for Human Brain Mapping created the Committee on Best Practice in Data Analysis and Sharing (COBIDAS), charged with creating a report that collects best practice recommendations from experts and the entire brain imaging community. The purpose of this work is to elaborate the principles of open and reproducible research for neuroimaging using Magnetic Resonance Imaging (MRI), and then distill these principles into specific research practices. Many elements of a study are so varied that practice cannot be prescribed, but for these areas we detail the information that must be reported to fully understand and potentially replicate a study. For other elements of a study, such as statistical modelling, where specific poor practices can be identified, and the emerging areas of data sharing and reproducibility, we detail both good practice and reporting standards. For each of seven areas of a study we provide a tabular listing of over 100 items to help plan, execute, report, and share research in the most transparent fashion. Whether for individual scientists or for editors and reviewers, we hope these guidelines serve as a benchmark to raise the standards of practice and reporting in neuroimaging using MRI.

biorxiv neuroscience 100-200-users 2016

 
