How to review a manuscript

The purpose of peer review is to give suggestions that improve the paper, to recommend accepting it as is, or to recommend rejecting it because it has too many holes to be fixed by the review process. Ultimately, it is the job of the Editor to decide whether a manuscript is accepted or rejected, but your careful reading and your suggestions for areas to improve or clarify will generally play a large role in that decision.

Reviews should be kind. Some people sign all of their reviews, although that practice has its own problems; at a minimum, a review should be written so that you would be comfortable having your name attached to it. Neil Lawrence has said that there are two types of reviewers: gatekeepers, who guard the journal and the literature from lower-quality publications, and community builders, who want the research world to be exposed to new ideas and clear research. Being a community builder is certainly harder, but I try to accept only those review requests where I can make the paper better rather than trying to weed it from the literature; this is harder to do, but essential. Similarly, when authors do not take my concerns seriously, I decline to re-review the paper -- reviewing is a two-way conversation.

Review Criteria

Generally there are a number of criteria upon which a review is based.

Technically Sound

As a peer reviewer, your main task is to determine whether the methods and the data are technically sound. The editor generally does not have the technical background to evaluate every paper in detail, but you do. The editor will rely heavily on your opinions and ideas in this area, and this is by far the most important aspect of a review.

  • Are the data appropriately QC’d? Are the outliers controlled for?

  • Are the model assumptions appropriate in this context?

  • Was the number of significant results what you expected? Why or why not? Was there a proper experimental control? Was the null hypothesis evaluated appropriately, and was the false discovery rate (FDR) controlled in a reasonable way (see the sketch after this list)?

  • Are the conclusions warranted from the results? This is a big question that weaves throughout the Results section.

  • Simulation results: Is the method better on simulated data? Were important evaluations omitted or avoided? Was the simulation both challenging and realistic? Were the comparison methods truly state-of-the-art, and were they run on an equal footing?

  • Real-data results: What were the conclusions? Do you believe them?
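
When a Results section leans on long lists of p-values, I will sometimes sanity-check the multiple-testing logic myself. Below is a minimal sketch of the standard Benjamini-Hochberg step-up procedure in Python; it is not taken from any particular paper, and the function name and toy p-values are purely illustrative of the kind of check a reviewer might run.

    import numpy as np

    def benjamini_hochberg(p_values, alpha=0.05):
        """Return a boolean mask of tests rejected at FDR level alpha."""
        p = np.asarray(p_values)
        m = p.size
        order = np.argsort(p)                        # sort p-values ascending
        thresholds = alpha * np.arange(1, m + 1) / m # BH step-up thresholds i/m * alpha
        below = p[order] <= thresholds               # which sorted p-values meet their threshold
        rejected = np.zeros(m, dtype=bool)
        if below.any():
            k = np.nonzero(below)[0].max()           # largest rank meeting the criterion
            rejected[order[:k + 1]] = True           # reject that p-value and all smaller ones
        return rejected

    # Toy check: a few strong signals mixed with uniform noise
    rng = np.random.default_rng(0)
    p_vals = np.concatenate([rng.uniform(0, 1e-4, 3), rng.uniform(0, 1, 17)])
    print(benjamini_hochberg(p_vals).sum(), "of", p_vals.size, "tests pass at FDR 0.05")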

Much of this is thoughtful reflection on the ideas presented in the paper. Take your time with interesting papers to really understand how they came to the conclusions they did and what this means in terms of the bigger picture. Your feedback really will make a difference.

Originality

A second criterion is the originality of the manuscript. This can mean a number of things: Are the methods original, the data original, or the conclusions original? Some people disagree that originality should be a criterion of peer review. Regardless, proper domain-specific and methodological context should be part of every paper to put its contributions in perspective.

  • What are the related papers? Are any obvious fields missing from the related work? Is the related work presented fairly? Are the claimed differences from these methods accurate?

  • Is the contribution of the current manuscript appropriately scoped, or is it oversold? Is it stated clearly?

  • Are the results original? Are they clearly presented in the context of domain-specific results from prior work? Are the original conclusions grounded in sound data?

  • Are all relevant works properly cited?

I once reviewed a paper whose method was not compared against many closely related methods, and it was quickly rejected. All it takes is a Google Scholar search to determine whether others have had similar ideas before. Context is critical to explaining originality.

That said, many reviews (especially for conferences like NeurIPS) say: “This model has two parts, A and B. Our community has studied both A and B well. There is nothing new here.” This is the worst kind of review. It is lazy and unactionable. Have people ever tried to put A and B together before? Is this a careful study of what happens when A and B are brought together? Is the model well motivated by the specific data application? Do the authors show that it really works on real data? If so, the work is entirely original.

Manuscript Presentation and Layout

If the ideas in a paper are not well presented, the paper should not be published (this is my opinion, and you may disagree).

  • Is the paper clearly written and understandable? Are all terms, acronyms, and abbreviations defined? Is technical jargon explained clearly? Do you have to re-read sentences or paragraphs multiple times to get the point?

  • Are the Figures and Tables purposeful and helpful to the presentation? Are all axes labeled correctly? Are there simple ways to make them more helpful to the reader?

  • Are the equations appropriate in the context of the paper? Are all variables well defined?

  • Is the structure of the paper appropriate? Are sections missing? Are concepts (e.g., Methods) explained clearly before they are used to draw conclusions?

  • Are there claims that the authors do not support with clear evidence?

  • Are essential proofs missing?

Some references