When Standards Go Wild: Software Review for a Manuscript

Post provided by Stefanie Butland, Nick Golding, Chris Grieves, Hugo Gruson, Thomas White, Hao Ye

This post is published on the rOpenSci and Methods in Ecology and Evolution blogs

Stefanie Butland, rOpenSci Community Manager

Some things are just irresistible to a community manager – PhD student Hugo Gruson’s recent tweets definitely fall into that category.

I was surprised and intrigued to see an example of our software peer review guidelines being used in a manuscript review, independent of our formal collaboration with the journal Methods in Ecology and Evolution (MEE). This is exactly the kind of thing rOpenSci is working to enable by developing a good set of practices that broadly apply to research software.

But who was this reviewer and what was their motivation? What role did the editors handling the manuscript play? I contacted the authors and then the journal, and in less than a week we had everyone on board to talk about their perspectives on the process.

Software Review Collaboration with rOpenSci

© The rOpenSci Project, 2017

The role of science journals is to publish papers about scientific research. To maintain the quality of what is published, we use peer review: we ask experts in the subject of a paper to read it and check that it is correct, that the arguments make sense, and so on.

One of the types of paper we publish is Applications, most of which describe software that will help ecologists and evolutionary biologists to do their research. Our focus is on the paper itself, but we also want to be confident that the software is well written: for example, that it has no obvious bugs, and that it is written so that future versions will not break.

Of course, thoroughly reviewing software takes a lot of time, and it is not the primary job of the journal’s peer review process. But we recognise that it needs to be done, and indeed many of our reviewers and editors already put a lot of time into doing just this, something we really appreciate. But can we do this better?

Fortunately, we were approached by the rOpenSci organisation, who wanted to collaborate with us to do this (a huge thanks to Scott Chamberlain for this initial approach and all of his hard work in putting this collaboration together). They are a group of coders, mainly in ecology, who have written a large number of open source R packages for a variety of tasks (e.g. importing data, visualisation). They also want to maintain good quality code, so they have implemented a variety of methods to do this.

One of these is code review. This is another form of peer review, but focused on the code, not the paper. This means the reviewer can concentrate on checking that the code works, that it is well written and documented (so other people can read the code and adapt it), and that it has the right set of tests, so that if something changes, it is straightforward to check that it still works.
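As a concrete illustration, here is a minimal sketch of the kind of unit test a reviewer might look for, written with the testthat R package. The function mean_body_mass() is a hypothetical stand-in for real package code, not something from an actual reviewed package:

```r
library(testthat)

# Hypothetical package function, used only for illustration
mean_body_mass <- function(x) mean(x, na.rm = TRUE)

# Check the basic behaviour: the function returns the arithmetic mean
test_that("mean_body_mass() returns the arithmetic mean", {
  expect_equal(mean_body_mass(c(10, 20, 30)), 20)
})

# Check an edge case: missing values are ignored rather than
# propagated into the result
test_that("mean_body_mass() ignores missing values", {
  expect_equal(mean_body_mass(c(10, NA, 30)), 20)
})
```

If a later change to the function accidentally altered how missing values are handled, the second test would fail immediately, which is exactly what makes it straightforward to check that the code still works after something changes.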