In peer review for scientific and technical papers, reviews occasionally discuss the paper’s authors directly. In this post, I argue that this behavior can be toxic: it harms the review process, the authors, and, potentially, the community as a whole. This particular problem can be prevented by one easy guideline, one already used in some communities: paper reviews should be about the paper, not the authors.

This follows my previous blog post, where I reviewed “non-violent communication” (NVC). NVC offers a way to communicate and find common ground by separating observable facts and feelings, while avoiding judgments of other people. For this discussion, a “violent communication” is any statement that imposes an unwanted opinion on someone else, like telling someone that they’re always late or rude. As I discuss in that blog post, even compliments and unsolicited advice can be violent in some contexts. Additionally, I discuss the importance of talking about feelings, something many of us are trained to avoid, whether by childhood or by scientific and technical training.

Even if you’re not specifically interested in scientific paper reviews, this post may provide an interesting discussion of how thinking about feelings and communication can be useful in a technical context.

Here, I begin by describing two “violent” paper reviews that I’ve received in the past few years. The second one was especially nasty, and spurred this blog post. In this discussion, I’ll focus particularly on feelings—mine, and those of the reviewers. In a few places, I speculate on the reviewers’ feelings and identities in order to make my points; this itself constitutes violent communication, and would not be conducive to finding common ground in a discussion with these reviewers.

My first toxic review

Since 2019, I’ve been submitting papers to journals in vision science (i.e., the psychology and neuroscience of human vision), far outside of my background in computer science. Overall, my experience has been extremely positive, both in the review process, and with the vision scientists who have given me feedback and guidance on manuscripts. Reviewers have often misunderstood my submissions and written skeptical reviews, sometimes even leading to rejection. This is normal and useful: these misunderstandings give clues about problems in the paper, which I can then fix. But there have also been some toxic reviews.

My very first vision science submission got one violent review, based on misunderstanding the manuscript. Here’s a typical paragraph from it (typos included):

Hertzmann solves the problem by considering a world with no shadows or highlights or colour patches. …. There is a logical problem here. How can the observer imagine the world and optical structure of the Hertzmann conjecture without solving the problem of distinguishing the shadow, highlight and colour borders from the borders due to « edges « – ie the depth and slant changes ? In short, he assumes the solution in order to reach the solution. This is a classic error in reasoning.

Here, the anonymous reviewer has failed to understand the argument in the paper, and has made incorrect statements about it. The reviewer writes these wrong judgments with total confidence.

Moreover, the review refers to me by name, and calls my conjecture by my name (I had not given it a name in this version). This makes the entire evaluation personal—about me—for no apparent reason.

This reviewer concluded with some advice:

Hertzmann shows good understanding of a wide variety of techniques for analyzing images. He could call on this knowledge to create a good description of the problem he is trying to solve, and to bring to bear novel, new ideas in the literature, to see how far one can get with these today. He might look up the research on lines and shadows, now some ten years old. …

Perhaps the reviewer thought they were being helpful by giving a compliment and advice. But, in so doing, the reviewer implies that I am ignorant of the literature, that I do not know how to choose research problems, or how to write a paper. What sounds like well-meaning advice is offensive and insulting. I want to say back: who are you to tell me what to do?

The paper was rejected. I revised it based on the questions and misunderstandings in the reviews, and resubmitted it a month later. After a few more rounds of revision, the paper was published in Perception in 2020. I did not follow any of that reviewer’s bad advice. The fact that the paper was published feels vindicating: the reviewer was wrong about the core of the paper, and wrong about me.

But the review should not have been about me.

It’s normal for a reviewer to not understand a manuscript’s argument; seeing how readers misunderstand a paper is invaluable information for authors. I work very hard to understand reviewer comments and I heavily revise papers accordingly.

But it is not okay that the reviewer judges me, personally. This is violent communication. Not only were their comments inaccurate, they were toxic: they hurt, emotionally. They are insulting and painful to read. They make me want to respond with my own violent communication—to say that the reviewer is an asshole—and to discount everything they say. Violent communication is counter-productive: it’s hard to learn much from someone when they are hurting you. (This is also one reason that dog trainers have moved to training based on positive reinforcement instead of negative reinforcement.)

Toxic communication harms the community. When editors allow these kinds of comments, they send the message that this kind of reviewing is acceptable, even encouraged. Toxic communication poisons open communication and can even drive people away, especially students. I feel insecure about my ability to participate in vision science research, and feel like an outsider.

Authors need to feel safe to take the risk to submit papers. If, when I first submitted vision science papers, all the reviews had been like this, then I surely would have given up writing vision science papers entirely.

My second toxic review

My coauthors and I recently received a far worse review from Journal of Vision:

These are three fine questions, but the authors, 1) seem to be largely ignorant of the vast literature on visual appearance (and in particular image appearance) that has for decades addressed and made progress on these issues, and 2) seem to be so tied to the beliefs articulated in their conclusion that they have designed experiments (if they can actually be called that) that are a confused mess.

That is, they called us ignorant and characterized our beliefs. Later in the review, they continued to describe our mental states and beliefs, and commented directly on our training: “A quick review of the authors’ CVs show that they variously have training in Computer Science and Art History. … The authors badly need help in [vision science]”.

The reviewer concluded with:

I have reached the end of my patience with this paper. It is a poorly conceived, designed, executed, and documented monstrosity and I regret that I will never get back the time I have spent reading and reviewing it. [The first author] can be forgiven for their ignorance, but [the senior authors and another person] know better and should be embarrassed that they authorized the submission of this manuscript.

Here the reviewer again describes us as ignorant, makes baseless assumptions about how the paper was written, says we should feel bad, and, if we read the review correctly, seems to attack another person who isn’t even an author (i.e., the first author’s PhD advisor).

The reviewer made many useful criticisms; our submission had many problems. But that doesn’t explain the intense emotions that the reviewer reports. The review gives the impression of someone barely suppressing rage.

Here’s my theory: the reviewer misunderstood one of our claims about the literature—they thought we were saying that no one had worked on these problems. They took it as an insult to their work. Instead of recognizing their hurt (or questioning their interpretation, or even noticing that we actually did cite papers and books from this literature), they lashed out at us. The genuine flaws in our manuscript just added fuel to the fire. But that’s just one possibility.

In turn, this review made us very angry and upset, and we are still mad and hurt. Even writing this blog post has made me upset.

I shared the review with some vision scientist colleagues. They sympathized, and said they’ve received reviews like this too. They speculated that these kinds of reviews come from scientists who did a lot of psychophysics in the 70s and 80s and now feel left behind by the field.

The harsh reviewer might think that they’re justified in being harsh—as if having a paper rejected, and seeing that it needs a major overhaul, isn’t negative feedback enough. I could imagine legitimate arguments that some papers should not be submitted. But, if that’s what they believed, then the reviewer should have made those arguments, instead of just giving emotional abuse.

I have speculated about the reviewer’s emotions, state, and background in order to make a point. In an actual conversation, this would all be violent communication, and thus not constructive to finding common ground.

We have a ton of work ahead of us, to revise and overhaul the work, making use of the information in the reviews. We will certainly try to prevent these misunderstandings in our revisions. But it’s so much more difficult to study this review when it makes us angry and upset. As I said before, it’s hard to learn much from someone when they’re hurting you.

One final note: my discussion here has focused on the role of feelings. A reviewer feels threatened or insulted by a paper, so they write an angry review attacking the authors. The reviewer feels bad and tells the authors that they should feel bad too. The authors, in turn, feel angry and upset by the review, and then struggle to incorporate the feedback and figure out next steps. The reviewers discussed the manuscript in terms of the authors’ thought processes, thereby making the reviews unnecessarily personal, and sometimes insulting (especially when they were wrong).

Reviewing guidelines

I believe that scientific reviewing should include guidelines to prevent these kinds of reviews. The simplest principle is:

Reviews should be about the paper, not the authors.

The reasoning is simple: reviews are judging the paper for acceptance. Nothing about the authors is relevant to this decision. Moreover, directly discussing the authors makes the process personal, and can often be insulting to authors. Even compliments can come off as personal judgments.

Most places that I have published do have explicit rules about this. For example, the SIGGRAPH policy states:

Belittling or sarcastic comments, or comments on authors’ personalities, have no place in the reviewing process. Please evaluate the work, not the authors. The most valuable comments in a review are those that help the authors understand the shortcomings of their work and how they might improve it. …

(I might have contributed to this; I don’t remember.) The CVPR policy uses almost the same wording. These policies are enforced by the papers committee: if a review violates the policy, the reviewer is required to fix it before the review is sent to the authors.

This seems like it’s obviously the right thing to do. The primary purpose of paper reviewing is to decide whether to publish the paper, and the secondary purpose is to give useful feedback on the paper manuscript to the authors. Discussing the authors does not further either goal. Moreover, the reviewer does not have enough information to say anything useful and meaningful about the authors. Such statements would just end up being violent communication.

Moreover, I do think that this rule leads to better reviews, because it reminds reviewers to focus on the paper and its contents, rather than getting distracted by the authors’ identities or by judging their competence. (Blind review is even better for this.)

The whole point of peer review is to judge a paper on its merits, not the authors’ supposed merits.

For as long as I can remember, I have always followed a stricter rule for my own paper reviewing:

Reviews should never mention the authors.

I don’t ever remember seeing violent reviews in computer science, nor do I remember having to enforce the rule myself as an Area Chair/Editor. I think that simply having the guideline creates a more civil review culture.

One corollary I follow is to not address the authors directly (“you”). I often see reviews written in second person (“you did X, why?”). This, too, feels unnecessarily personal, mismatched to the primary purpose of the review: to inform the editor/committee/reviewers about whether to accept the paper.

Obviously, making review processes double-blind (as they are in CS) would also improve reviews: non-blind reviewing has been shown to produce biased reviews in many studies (e.g., 1, 2). It seems bizarre to me that some non-blind reviewing processes remain.

Exceptions can be made for certain cases: e.g., opinion and perspective pieces often reflect an author’s viewpoint, and the author’s identity may be relevant for these.

Reviewing style

How can you, as a reviewer, best implement these guidelines? I find it very easy: instead of “the authors say X,” I write “the paper says X.” I use passive voice when needed (e.g., “in the first experiment, participants were asked to …”). And, of course, I avoid any temptation to mention the authors’ own motivation, knowledge, background, and so on.

For example, I might restate the initial quote from my first toxic review (above) as:

As I understand it, the paper solves the problem by considering a world with no shadows or highlights or colour patches. …. But there seems to be a logical problem here. How can the observer imagine the world and optical structure of the paper’s conjecture without solving the problem of distinguishing the shadow, highlight and colour borders from the borders due to « edges « – ie the depth and slant changes ? It appears the paper assumes the solution in order to reach the solution.

There’s also some added uncertainty here—maybe I misunderstood the paper? It’s a lot more likely that a manuscript is unclear than that the authors genuinely mean something nonsensical.

If you really feel compelled to discuss the authors in reviews, then it’s worth seriously examining why, and whether doing so is actually beneficial.

This advice does contradict other advice I would give, like avoiding passive voice. In this case, I think avoiding violent communication is more important. Also, when writing about a paper that is published, I might mention the authors as saying/writing whatever they claim; it feels different when it’s a published paper versus an unpublished manuscript. My only rationalization for this is that a manuscript is just a draft, and the power relationship is different between an anonymous reviewer and an author of a submitted manuscript, versus the author of a published paper and someone commenting on it afterward.

One other factor is that I sometimes write theoretical papers partly in the first person in order to avoid passive voice (including that Perception paper), which might encourage reviewers to discuss the author. But I still argue that the review should be about the arguments.

Reviewing and Non-Violent Communication (NVC)

The core argument of this blog post could be expressed in terms of the four steps of NVC:

  1. What I observed in the paper reviews I received
  2. How I felt after receiving the reviews
  3. My needs: I need to feel “safe” from attack when submitting work for publication, and I believe other authors do too.
  4. My request: create and enforce reviewing guidelines not to mention authors in reviews.

I also claim that the rule leads to better reviews and better reviewing culture overall.

I am not advocating the full use of NVC in reviews. In this post, I have discussed NVC as a way of understanding (a) how one’s feelings influence the process, and (b) what not to include in a paper review, because it is counter-productive.

I think similar ideas can be helpful in other kinds of scientific and technical communication as well.


Thanks to Rich Radke and Maneesh Agrawala for comments on this blog post, and to the vision scientists who have given me advice and commiseration on submitting to vision science journals.