Disequilibrium in Gender Ratios among Authors who Contributed Equally, bioRxiv, 2018-01-01
Abstract: In recent decades, the biomedical literature has seen an increasing number of authors per article, together with a concomitant increase in authors claiming to have contributed equally. In this study, we analyzed over 3,000 publications from 1995–2017 in which authors sharing the first author position claimed equal contributions, examining author number, gender, and gender position. The frequency of dual pairings contributing equally was male-male > mixed gender > female-female. In mixed-gender pairs, males were more often at the first position, although the disparity has lessened in the past decade. Among author associations claiming equal contribution and containing three or more individuals, males predominated both in the first position and in the number of gender-exclusive groupings. Our results show a disequilibrium in gender ratios among authors who contributed equally relative to the ratios expected had the ordering been done randomly or alphabetically. Given the importance of the first author position in assigning credit for a publication, the finding of fewer than expected females in associations involving shared contributions raises concerns that women are not receiving their fair share of expected credit. The results suggest a need for journals to request clarity on the method used to decide author order among individuals claiming to have made equal contributions to a scientific publication.
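The null expectation referenced above can be made concrete: under random assignment, if a fraction m of co-first authors are male, pair types follow the binomial proportions m² (male-male), 2m(1−m) (mixed), and (1−m)² (female-female), and a mixed pair should list the male first only half the time. A minimal sketch, where the 60% male author pool is an illustrative assumption, not a figure from the study:

```python
def expected_pair_ratios(male_fraction):
    """Expected frequencies of co-first-author pairings if pairing
    and ordering were random with respect to gender."""
    m = male_fraction
    return {
        "male-male": m * m,
        "mixed": 2 * m * (1 - m),          # either ordering
        "female-female": (1 - m) * (1 - m),
        "mixed, male first": m * (1 - m),  # half of all mixed pairs
    }

# hypothetical author pool that is 60% male
ratios = expected_pair_ratios(0.6)
print(ratios)
```

Observed pairing frequencies can then be compared against these baseline proportions to quantify the disequilibrium.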
biorxiv scientific-communication-and-education 100-200-users 2018
Female grant applicants are equally successful when peer reviewers assess the science, but not when they assess the scientist, bioRxiv, 2017-12-13
Abstract: Background: Previous research shows that men often receive more research funding than women, but does not provide empirical evidence as to why this occurs. In 2014, the Canadian Institutes of Health Research (CIHR) created a natural experiment by dividing all investigator-initiated funding into two new grant programs: one with and one without an explicit review focus on the caliber of the principal investigator. Methods: We analyzed application success among 23,918 grant applications from 7,093 unique principal investigators in a 5-year natural experiment across all investigator-initiated CIHR grant programs in 2011–2016. We used Generalized Estimating Equations to account for multiple applications by the same applicant, with an interaction term between each principal investigator's self-reported sex and grant program, to compare success rates between male and female applicants under different review criteria. Results: The overall grant success rate across all competitions was 15.8%. After adjusting for age and research domain, the predicted probability of funding success in traditional programs was 0.9 percentage points higher for male than for female principal investigators (OR 0.934, 95% CI 0.854–1.022). In the new program focused on the proposed science, the gap was 0.9 percentage points in favour of male principal investigators (OR 0.998, 95% CI 0.794–1.229). In the new program with an explicit review focus on the caliber of the principal investigator, the gap was 4.0 percentage points in favour of male principal investigators (OR 0.705, 95% CI 0.519–0.960). Interpretation: This study suggests that gender gaps in grant funding are attributable to less favourable assessments of women as principal investigators, not to differences in assessments of the quality of science led by women. We propose ways for funders to avoid allowing gender bias to influence research funding. Funding: This study was unfunded.
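The link between the reported odds ratios and percentage-point gaps is simple arithmetic: convert the reference group's probability to odds, scale by the odds ratio, and convert back. A sketch, assuming a male success rate near the overall 15.8% (the exact baseline used in the paper's adjusted models is not given in the abstract):

```python
def apply_odds_ratio(p_reference, odds_ratio):
    """Convert a reference-group probability and an odds ratio into
    the comparison group's predicted probability."""
    odds = p_reference / (1 - p_reference)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

p_male = 0.158                               # overall success rate from the abstract
p_female = apply_odds_ratio(p_male, 0.705)   # OR for the investigator-focused program
gap = (p_male - p_female) * 100
print(f"predicted female success: {p_female:.3f}, gap: {gap:.1f} points")
```

With these assumed inputs the computed gap comes out near the 4.0 percentage points reported for the investigator-focused program.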
biorxiv scientific-communication-and-education 500+-users 2017
Assessing the Landscape of U.S. Postdoctoral Salaries, bioRxiv, 2017-12-04
Abstract: Purpose: Postdocs make up a significant portion of the biomedical workforce. However, data about the postdoctoral position are generally scarce, including salary data. The purpose of this study was to request, obtain, and interpret actual salaries, and the associated job titles, for postdocs at U.S. public institutions. Methodology: Freedom of Information Act requests were submitted to U.S. public institutions estimated to have at least 300 postdocs according to the National Science Foundation's Survey of Graduate Students and Postdocs. Salaries and job titles of postdoctoral employees as of December 1st, 2016 were requested. Findings: Salaries and job titles were received for over 13,000 postdocs at 52 public U.S. institutions and one private institution around the date of December 1st, 2016; individual names were also received for approximately 7,000 postdocs. This study shows evidence of gender-related salary discrepancies, a significant influence of job title description on postdoc salary, and a complex relationship between salaries and the level of institutional NIH funding. Value: These results provide insights into the ability of institutions to collate actual payroll-type data related to their postdocs, highlighting difficulties faced in tracking and reporting data on this population. Ultimately, these types of efforts, aimed at increasing transparency, may lead to improved tracking and support for postdocs at all U.S. institutions.
biorxiv scientific-communication-and-education 100-200-users 2017
A design framework and exemplar metrics for FAIRness, bioRxiv, 2017-11-28
Abstract: “FAIRness”, the degree to which a digital resource is Findable, Accessible, Interoperable, and Reusable, is aspirational, yet the means of reaching it may be defined by increased adherence to measurable indicators. We report on the production of a core set of semi-quantitative metrics having universal applicability for the evaluation of FAIRness, and a rubric within which additional metrics can be generated by the community. This effort is the output of a stakeholder-representative group, founded by a core of the FAIR principles' co-authors and drivers. We now seek input from the community to discuss the merit of these metrics more broadly.
biorxiv scientific-communication-and-education 100-200-users 2017
Explanation implies causation?, bioRxiv, 2017-11-14
Abstract: Most researchers do not deliberately claim causal results in an observational study. But do we lead our readers to draw a causal conclusion unintentionally by explaining why significant correlations and relationships may exist? Here we perform a randomized study in a data analysis massive open online course to test the hypothesis that explaining an analysis will lead readers to interpret an inferential analysis as causal. We show that adding an explanation to the description of an inferential analysis leads to a 15.2% increase in readers interpreting the analysis as causal (95% CI 12.8%–17.5%). We then replicate this finding in a second large-scale massive open online course. Nearly every scientific study, regardless of the study design, includes explanations for observed effects. Our results suggest that these explanations may mislead the audience of these data analyses.
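The reported effect is a difference between two proportions (readers rating the analysis causal with vs. without an explanation), for which a Wald-style confidence interval can be computed. A sketch of that calculation; the counts below are hypothetical, chosen only to illustrate the method, not the study's actual data:

```python
import math

def diff_proportion_ci(x1, n1, x2, n2, z=1.96):
    """95% Wald confidence interval for the difference of two
    independent proportions, p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# hypothetical: 400/1000 causal interpretations with an explanation,
# 250/1000 without one
lo, hi = diff_proportion_ci(400, 1000, 250, 1000)
print(f"difference CI: ({lo:.3f}, {hi:.3f})")
```

An interval excluding zero, as in the study's 12.8%–17.5%, indicates the explanation shifted interpretations.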
biorxiv scientific-communication-and-education 100-200-users 2017
The reproducibility of research and the misinterpretation of P values, bioRxiv, 2017-06-01
Abstract: We wish to answer this question: if you observe a “significant” P value after doing a single unbiased experiment, what is the probability that your result is a false positive? The weak evidence provided by P values between 0.01 and 0.05 is explored by exact calculations of false positive risks.

When you observe P = 0.05, the odds in favour of there being a real effect (given by the likelihood ratio) are about 3:1. This is far weaker evidence than the odds of 19 to 1 that might, wrongly, be inferred from the P value. And if you want to limit the false positive risk to 5%, you would have to assume that you were 87% sure that there was a real effect before the experiment was done.

If you observe P = 0.001 in a well-powered experiment, it gives a likelihood ratio of almost 100:1 odds on there being a real effect. That would usually be regarded as conclusive, but the false positive risk would still be 8% if the prior probability of a real effect were only 0.1. And, in this case, if you wanted to achieve a false positive risk of 5%, you would need to observe P = 0.00045.

It is recommended that the terms “significant” and “non-significant” never be used. Rather, P values should be supplemented by specifying the prior probability that would be needed to produce a specified (e.g. 5%) false positive risk. It may also be helpful to specify the minimum false positive risk associated with the observed P value.

Despite decades of warnings, many areas of science still insist on labelling a result of P < 0.05 as “statistically significant”. This practice must contribute to the lack of reproducibility in some areas of science, and that is before you get to the many other well-known problems, like multiple comparisons, lack of randomisation, and P-hacking. Precise inductive inference is impossible, and replication is the only way to be sure. Science is endangered by statistical misunderstanding, and by senior people who impose perverse incentives on scientists.
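The arithmetic behind these false positive risks follows directly from the likelihood ratio and the prior: with prior probability q of a real effect and likelihood ratio L at the observed P value, the false positive risk is (1 − q) / ((1 − q) + qL), and the same identity can be inverted to find the prior needed for a target risk. A minimal sketch of both calculations, using the likelihood ratios quoted in the abstract:

```python
def false_positive_risk(prior, likelihood_ratio):
    """Probability that a 'significant' result is a false positive,
    given the prior P(real effect) and the likelihood ratio."""
    return (1 - prior) / ((1 - prior) + prior * likelihood_ratio)

def prior_needed(target_fpr, likelihood_ratio):
    """Prior P(real effect) required to bring the false positive
    risk down to target_fpr (inversion of the formula above)."""
    f, L = target_fpr, likelihood_ratio
    return (1 - f) / ((1 - f) + f * L)

print(false_positive_risk(0.1, 100))  # P = 0.001, prior 0.1 -> ~0.083, the ~8% above
print(prior_needed(0.05, 3))          # P = 0.05, LR ~ 3 -> ~0.86, near the 87% above
```

The small residual differences from the abstract's figures arise because the paper's exact calculations use the precise likelihood ratio at the observed P rather than the rounded 3:1 and 100:1 values.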
biorxiv scientific-communication-and-education 200-500-users 2017