I know it is a publish-or-perish world. Despite my liberal arts background, where professors are not required to obtain grants to survive, I’ve recognized the importance of publishing, not only for one’s own career growth but also as the researcher’s responsibility to disseminate one’s findings. However, I had always believed that effort and experimental “luck” were the limiting factors. If you worked hard and there weren’t any experimental catastrophes (e.g., a virus in the animal colony, bad antibodies, a poorly timed fire alarm), then publishing would be within your control (of course, getting published in a high-tier journal may be considered a long shot). I can live with the fact that my findings may not be worthy of being printed in the elusive pinnacle of scientific journals; if that’s my goal, then I need to redesign my experiments, shift my expectations, and add a few more difficult and expensive techniques. But when all the ducks are in a row, it’s terribly disappointing to potentially lose out on a highly sought-after publication due to factors beyond your control, such as a rogue, “unbiased” reviewer.
After years of data collection and over a month of writing, revisions, more revisions, and rewritings, I submitted my manuscript, blindly optimistic about its success and believing in the simple formula: effort + time + creativity = publication. When you put your best product on the table, why wouldn’t you be optimistic? The next couple of weeks were spent obsessively checking the “manuscript status” page on the journal’s website, waiting patiently as the status changed from “sent to editor” to “manuscript under review” and finally “under editorial review”, each step a small hurdle in this critical race. Waiting, waiting, waiting. Why is it taking so long? Sure, the reviewers have other jobs, other deadlines, their own papers and grants to write. I get it; this is the peer review process at work, the go-to source of credibility for any debate pitting scientific evidence against perception or belief (e.g., “Well, the peer-reviewed literature claims…”). It’s a valuable process, but slow. Three weeks, four weeks, five weeks. Getting anxious.
Over a month and a half after submission, the long-awaited mailbox ping ripped through the speakers. I was taken back to senior year of high school, opening that college admission letter. (Why don’t all acceptance letters start with one word, either “yes” or “no”? Then we could skip the formalities and decide whether to post it to the fridge or toss it in the shredder.) At last, I opened the email to reveal the manuscript’s fate.
Throughout the waiting period, I had ridden the roller coaster of anticipated manuscript fate, undulating from “of course it will be accepted” to “not a chance in hell will I ever see my name printed in this journal”. But rarely did this roller coaster end up on the flats back at the starting platform, which is exactly what the email indicated would be the status of the publication: it never left the gate. After all the waiting, the anxiety, and the vicissitudes of expectation, the outcome was a draw. Included in the letter was the option, if we wished, to address the reviewers’ concerns and resubmit, only to go through the whole process again. So there it was: neither accepted nor rejected. Not even an “accepted pending moderate/major revisions” or credit for an “original submission date”. Instead, it was a reset, leaving me with that empty feeling you get after watching a championship game that ends in a tie. Sure, it could have been worse: the manuscript might have been rejected outright. But my immediate thoughts turned to how this fate came to be; what were the reviewers’ comments that led to such ambiguity?
Attached to the email were the reviewers’ comments. I opened to reviewer #1. Okay, pretty good: some slight editorial issues and a statistical addition, but an easy fix. Reviewer #2 was also positive, with comments similar to reviewer #1’s, plus a request for a dose-response curve. No problem; give me two weeks. These issues were far from grounds for rejection, not by a long shot; both reviewers appeared to give the manuscript the green light. A quick calculation: out of the three reviewers, the first two were positive, so I already didn’t like reviewer #3. I quickly skimmed through reviewer #3’s comments, got confused, then re-read them more closely to ascertain whether this individual had actually read the whole manuscript. It quickly became clear what type of battle lay ahead. This reviewer did not take up arms against my experiments, my protocols, or even my data, but instead fought the novelty of the findings. The comments reflected the belief that “these results can’t be true since they don’t match the available literature, so there must be something wrong”. Yes, I was publishing on a previously undetected response to alcohol that was opposite to dogma, but we had used a different animal model and confirmed the previously published results in the old animal model as well. So where’s the beef? One would hope that a reviewer’s comments would suggest experiments to address their concerns or expose experimental flaws that may have led to erroneous data interpretation. Some specificity would have been welcome! Not so with reviewer #3. Other comments posed by this reviewer indicated a failure to read the paper completely, or closely. For instance, several of reviewer #3’s concerns were actually addressed within the manuscript itself, while others attributed claims to us that we never made.
This vendetta against our conclusions was supported not by proposing potential errors in our science, but by trying to find error where none existed (you might have thought they would at least have picked up on some of the actual issues raised by reviewers #1 and #2!). Importantly, this also exposed a bias and an aversion to change that, unfortunately, many of us hold.
As scientists, we cannot fear changes to our scientific understanding, or we risk progress becoming an illusion. Yet we must be critical in assessing why a change in our understanding occurred. How did we miss it before? I respect reviewer #3 for being skeptical of ideas that challenged their understanding, but my respect diminished when stubbornness impeded their ability to be a critical, unbiased scientist. I’m not implying that this manuscript couldn’t be improved; reviewers #1 and #2 raised valid points showing it was far from perfect. However, in this competitive scientific climate, rejection due to one individual’s subjective and unfounded response carries more weight than a simple disagreement. So now I’m left back at the drawing board, improving the product but also trying to be a better marketer. There are many more reviewer #3s out there, and one of the goals of our scientific education should be to remain open enough to judge good science on its experimental procedures and the unbiased interpretation of its results, not on its adherence to preconceived expectations. Great truths often begin as blasphemy, and we must be willing to accept empirically supported deviations from our beliefs. And if we can’t do this, then we should be respectful enough to recuse ourselves in the hope of not impeding the progress of others.