In the original article, there were several errors.
A correction has been made to Introduction, paragraph one:
The most groundbreaking, transformative research results deserve a broad readership and a large audience. Therefore, scientists submit their best work to the journals with the largest audience. While the number of scientists has been growing exponentially over the last decades, the number of journals with a large audience has not kept pace, nor has the number of articles published per journal. Consequently, acceptance rates at the most prestigious journals have fallen below 10%, and the labor of rejecting submissions has become these journals' largest cost item. On the assumption that this exclusivity allows the journals to separate the wheat from the chaff, successful publication in these journals is treated as a quality signal in hiring, promotion, and funding decisions. If anything, these developments have fueled the circularity of this relationship: today, publishing groundbreaking science in a high-ranking journal is important not only for science to advance but also for an author's career to advance. Even before science became hypercompetitive at every level, results published in prestigious journals were now and again later found to be false. This is the nature of science: science is difficult, complicated, and perpetually preliminary. Science is self-correcting, and better experiments will continue to advance it, superseding earlier ones. Today, however, fierce competition exacerbates this trait and renders it a massive problem for scholarly journals. It has now become their task to find the groundbreaking among the too-good-to-be-true data submitted by desperate scientists who face unemployment and/or laboratory closure without the next high-profile publication. This is a monumental task, given that it sometimes takes decades to discover that a given result rests on flimsy grounds. How is our hierarchy of more than 30,000 journals holding up?
A correction has been made to Introduction, paragraph two:
At first glance, it appears as if our journals fail miserably. When retractions, the capital punishment for articles found to be irreproducible, are evaluated, the most prestigious journals boast the largest numbers (Fang and Casadevall, 2011), and most of these retractions are due to fraud (Fang et al., 2012). However, data on retractions suffer from two major flaws that make them rather useless for answering questions about the contribution of journals to the reliability (or lack thereof) of our scholarly literature: (1) retractions cover only about 0.05% of the literature; and (2) they are confounded by error-detection variables that are hard to trace. So maybe our journals are not doing so horribly after all?
A correction has been made to Statistical Power in Neuroscience/Psychology, paragraph one:
Statistical power (defined as 1 − β, where β denotes the type II error rate; a quantity computed from sample size and effect size) allows inference about the likelihood that a nominally statistically significant finding reflects a true effect. As such, statistical power is directly related to the reliability of the experiments conducted. Button et al. (2013) analyzed the statistical power of 730 individual primary neuroscience studies. These data do not show any correlation with journal rank (Brembs et al., 2013; Figure 3).
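To make the power calculation concrete, the following is a minimal sketch, not taken from the original article, of how power follows from sample size and effect size for a two-sample t-test. The effect size d = 0.5 and α = 0.05 are assumed illustrative values, and the statsmodels library is used for the computation:

```python
# Minimal sketch (illustrative, not from the original article):
# statistical power = 1 - beta, the probability of detecting a true
# effect of a given size at a given significance level.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed values: medium effect size d = 0.5, two-sided alpha = 0.05.
for n in (10, 20, 50, 100):
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group -> power = {power:.2f}")
```

Under these assumptions, power rises from roughly 0.19 at n = 10 per group to about 0.94 at n = 100, which illustrates why small samples undermine the reliability of nominally significant findings.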
The author apologizes for these errors and states that this corrigendum does not change the scientific conclusions of the article in any way.
The original article has been updated.
Conflict of Interest Statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
Brembs, B., Button, K., and Munafò, M. (2013). Deep impact: unintended consequences of journal rank. Front. Hum. Neurosci. 7:291. doi: 10.3389/fnhum.2013.00291
Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., et al. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 14, 365–376. doi: 10.1038/nrn3475
Fang, F. C., and Casadevall, A. (2011). Retracted science and the retraction index. Infect. Immun. 79, 3855–3859. doi: 10.1128/IAI.05661-11
Fang, F. C., Steen, R. G., and Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proc. Natl. Acad. Sci. U.S.A. 109, 17028–17033. doi: 10.1073/pnas.1212247109