Rigor and Transparency Index, a new metric of quality for assessing biological and medical science methods, bioRxiv, 2020-01-19

The reproducibility crisis in science is a multifaceted problem involving practices and incentives, both in the laboratory and in publication. Fortunately, some of the root causes are known and can be addressed by scientists and authors alike. After careful consideration of the available literature, the National Institutes of Health identified several key problems with the way that scientists conduct and report their research and introduced guidelines to improve the rigor and reproducibility of pre-clinical studies. Many journals have implemented policies addressing these same criteria. We currently have, however, no comprehensive data on how these guidelines are impacting the reporting of research. Using SciScore, an automated tool developed to review the methods sections of manuscripts for the presence of criteria associated with the NIH and other reporting guidelines (e.g., ARRIVE, RRIDs), we have analyzed ∼1.6 million PubMed Central papers to determine the degree to which articles addressed these criteria. The tool scores each paper on a ten-point scale, identifying sentences associated with compliance with rigor criteria (5 pts) and sentences associated with key resource identification and authentication (5 pts). From these data, we have built the Rigor and Transparency Index, the average score for analyzed papers in a particular journal. Our analyses show that the average score over all journals has increased since 1997 but remains below five, indicating that less than half of the rigor and reproducibility criteria are routinely addressed by authors.
To analyze the data further, we examined the prevalence of individual criteria across the literature, e.g., the reporting of a subject's sex (21-37% of studies between 1997 and 2019), the inclusion of sample size calculations (2-10%), whether the study addressed blinding (3-9%), or the identifiability of key biological resources such as antibodies (11-43%), transgenic organisms (14-22%), and cell lines (33-39%). The greatest increase in prevalence for rigor criteria was seen in the use of randomization of subjects (10-30%), while software tool identifiability improved the most among key resource types (42-87%). We further analyzed individual journals over time that had implemented specific author guidelines covering rigor criteria, and found that in some journals the guidelines had a substantial impact, whereas in others they did not. We speculate that unless they are enforced, author guidelines alone do little to improve the number of criteria addressed by authors. Our Rigor and Transparency Index did not correlate with the impact factors of journals.
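As described above, the Rigor and Transparency Index is simply the average of the per-paper SciScore values (0-10) within a journal. A minimal sketch of that aggregation, assuming per-paper scores are already available (the function and variable names here are illustrative, not part of SciScore itself):

```python
# Hypothetical sketch: average per-paper scores (0-10 scale) by journal.
# Assumes scores have already been produced by a tool such as SciScore.
from collections import defaultdict
from statistics import mean

def rigor_transparency_index(papers):
    """papers: iterable of (journal, score) pairs -> {journal: mean score}."""
    by_journal = defaultdict(list)
    for journal, score in papers:
        by_journal[journal].append(score)
    return {journal: mean(scores) for journal, scores in by_journal.items()}

# Illustrative data only: two papers in one journal, one in another.
scores = [("Journal A", 4.0), ("Journal A", 6.0), ("Journal B", 3.0)]
print(rigor_transparency_index(scores))  # {'Journal A': 5.0, 'Journal B': 3.0}
```

A journal-level average like this smooths over paper-to-paper variation, which is why the abstract reports it alongside the prevalence of individual criteria.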

biorxiv scientific-communication-and-education 0-100-users 2020

Insights from a survey-based analysis of the academic job market, bioRxiv, 2019-10-09

Applying for a faculty position is a critical phase of many postdoctoral careers, but most postdoctoral researchers in STEM fields enter the academic job market with little knowledge of the process and expectations. A lack of data has made it difficult for applicants to assess their qualifications relative to the general applicant pool and for institutions to develop effective hiring policies. We analyzed responses to a survey of faculty job applicants between May 2018 and May 2019. We establish various background scholarly metrics for a typical faculty applicant and present an analysis of the interplay between those metrics and hiring outcomes. Above a certain threshold of qualifications, traditional benchmarks of a strong research track record did not reliably differentiate applicants with and without offers. Our findings suggest that there is no single clear path to a faculty job offer and that metrics such as career transition awards and publications in high impact factor journals were neither necessary nor sufficient for landing a faculty position. The applicants perceived the process as unnecessarily stressful, time-consuming, and largely lacking in feedback, irrespective of a successful outcome. Our findings emphasize the need to improve the transparency of the faculty job application process. In addition, we hope these and future data will help empower trainees to enter the academic job market with clearer expectations and improved confidence.

biorxiv scientific-communication-and-education 500+-users 2019

The Future of OA: A large-scale analysis projecting Open Access publication and readership, bioRxiv, 2019-10-09

Understanding the growth of open access (OA) is important for deciding funder policy, subscription allocation, and infrastructure planning. This study analyses the number of papers available as OA over time. The model includes both OA embargo data and the relative growth rates of different OA types over time, based on the OA status of 70 million journal articles published between 1950 and 2019. The study also looks at article usage data, analyzing the proportion of views to OA articles versus views to articles which are closed access. Signal processing techniques are used to model how these viewership patterns change over time. Viewership data are based on 2.8 million uses of the Unpaywall browser extension in July 2019. We found that Green, Gold, and Hybrid papers receive more views than their Closed or Bronze counterparts, particularly Green papers made available within a year of publication. We also found that the proportion of Green, Gold, and Hybrid articles is growing most quickly. In 2019, 31% of all journal articles were available as OA, and 52% of article views were to OA articles. Given existing trends, we estimate that by 2025, 44% of all journal articles will be available as OA and 70% of article views will be to OA articles. The declining relevance of closed access articles is likely to change the landscape of scholarly communication in the years to come.

biorxiv scientific-communication-and-education 200-500-users 2019

Comparison of bibliographic data sources: Implications for the robustness of university rankings, bioRxiv, 2019-09-01

Universities are increasingly evaluated, both internally and externally, on the basis of their outputs. Often these are converted to simple, and frequently contested, rankings based on quantitative analysis of those outputs. These rankings can have substantial implications for student and staff recruitment, research income, and the perceived prestige of a university. Both internal and external analyses usually rely on a single data source to define the set of outputs assigned to a specific university. Although some differences between such databases are documented, few studies have explored them at the institutional scale and examined the implications of these differences for the metrics and rankings that are derived from them. We address this gap by performing detailed bibliographic comparisons between three key databases: Web of Science (WoS), Scopus, and the recently relaunched Microsoft Academic (MSA). We analyse the differences between outputs with DOIs identified from each source for a sample of 155 universities and supplement this with a detailed manual analysis of the differences for fifteen universities. We find significant differences between the sources at the university level. Sources differ in the publication year of specific objects, the completeness of metadata, as well as in their coverage of disciplines, outlets, and publication type. We construct two simple rankings based on citation counts and open access status of the outputs for these universities and show dramatic changes in position based on the choice of bibliographic data sources. Those universities that experience the largest changes are frequently those from non-English speaking countries and those that are outside the top positions in international university rankings. Overall, MSA has greater coverage than Scopus or WoS, but has less complete affiliation metadata. We suggest that robust evaluation measures need to consider the effect of the choice of data sources and recommend an approach where data from multiple sources are integrated to provide a more robust dataset.

biorxiv scientific-communication-and-education 0-100-users 2019

The ARRIVE guidelines 2019: updated guidelines for reporting animal research, bioRxiv, 2019-07-15

Reproducible science requires transparent reporting. The ARRIVE guidelines were originally developed in 2010 to improve the reporting of animal research. They consist of a checklist of information to include in publications describing in vivo experiments to enable others to scrutinise the work adequately, evaluate its methodological rigour, and reproduce the methods and results. Despite considerable levels of endorsement by funders and journals over the years, adherence to the guidelines has been inconsistent, and the anticipated improvements in the quality of reporting in animal research publications have not been achieved. Here we introduce ARRIVE 2019. The guidelines have been updated and the information reorganised to facilitate their use in practice. We used a Delphi exercise to prioritise the items and split the guidelines into two sets: the ARRIVE Essential 10, which constitute the minimum requirement, and the Recommended Set, which describes the research context. This division facilitates improved reporting of animal research by supporting a stepwise approach to implementation, and helps journal editors and reviewers verify that the most important items are being reported in manuscripts. We have also developed the accompanying Explanation and Elaboration document, which serves 1) to explain the rationale behind each item in the guidelines, 2) to clarify key concepts, and 3) to provide illustrative examples. We aim through these changes to help ensure that researchers, reviewers, and journal editors are better equipped to improve the rigour and transparency of the scientific process and thus reproducibility.

biorxiv scientific-communication-and-education 0-100-users 2019


Created with the audiences framework by Jedidiah Carlson