Making Your Research Reproducible with R

Post provided by Laura Graham

Reproducible research is important for three main reasons. Firstly, it makes it much easier to revisit a project a few months down the line, for example when making revisions to a paper that has been through peer review.

Secondly, it allows readers of a published article to scrutinise your results more easily, making it easier to demonstrate their validity. For this reason, some journals and reviewers are starting to ask authors to provide their code.

Thirdly, having clean and reproducible code available can encourage greater uptake of new methods. It’s much easier for users to replicate, apply and improve on methods if the code is reproducible and widely available.

Throughout my PhD and Postdoctoral research, I have aimed to ensure that I use a reproducible workflow and this generally saves me time and helps to avoid errors. Along the way I’ve learned a lot through the advice of others, and trial and error. In this post I have set out a guide to creating a reproducible workflow and provided some useful tips. Continue reading “Making Your Research Reproducible with R”

RSS Meeting on Model Averaging: Elephants, Oscars and Spiky Data

Post provided by Dr Eleni Matechou

Eleni is a Lecturer in Statistics and a member of the Statistical Ecology @ Kent (SE@K) group at the University of Kent. She develops statistical models motivated by ecological applications to study populations of birds, insects and, more recently, humans.

On September 15th 2016, a half-day meeting on Model Averaging – organised by the Environmental Statistics section and the East Kent local group of the Royal Statistical Society (RSS) – took place at the University of Kent in Canterbury.

There were three invited speakers: Professor Richard Chandler (University College London), Professor Jonty Rougier (University of Bristol) and Dr Kate Searle (Centre for Ecology and Hydrology), who presented via Skype.

All three talks included interesting motivating data, clever modelling and great insight.

Taming the Pachyderm

Professor Richard Chandler presented joint work with Marianna Demetriou on “The interpretation of climate model ensembles”. Projecting future global temperatures is clearly a timely topic, and Richard’s talk highlighted the challenges of doing this reliably. And they’re certainly not minor challenges: in his own words, this is a problem he has spent 10 years thinking about! Continue reading “RSS Meeting on Model Averaging: Elephants, Oscars and Spiky Data”

Animal Density and Acoustic Detection: An Interview with Ben Stevenson

David Warton (University of New South Wales) interviews Ben Stevenson (University of St Andrews) about his 2015 Methods in Ecology and Evolution paper ‘A general framework for animal density estimation from acoustic detections across a fixed microphone array’. They also discuss what Ben is currently up to, including an interesting new method for dealing with uncertain identification in capture-recapture, published in Statistical Science as ‘Trace-Contrast Models for Capture–Recapture Without Capture Histories’.

Continue reading “Animal Density and Acoustic Detection: An Interview with Ben Stevenson”

Peer Review Week: Should we use double blind peer review? The evidence…

Non-blind Peer Review Monster

This week is Peer Review Week, the slightly more popular academic celebration than pier review week. Peer review is an essential part of scientific publication and – like Churchill’s democracy – it is the worst way of doing it, except for all of the others. The reason it’s imperfect is mainly that it’s done by people, so there is a natural desire to try to improve it.

One suggestion for improvement is to use double blind reviews. At the moment most journals (including Methods in Ecology and Evolution) use single blind reviewing, where the author isn’t told the identity of the reviewers. The obvious question is whether double blind reviewing actually improves reviews: does it reduce bias, or improve quality? Several studies across various disciplines have looked at this and related questions. Having read them, my summary is that double blind reviewing is fairly popular, but makes little or no difference to the quality of the reviews, and reviewers can often identify the authors of the papers.

Continue reading “Peer Review Week: Should we use double blind peer review? The evidence…”

Next-Gen Peer Review: Solving Today’s Problems with Tomorrow’s Solutions

Post provided by Jess Metcalf and Sean McMahon

Subject area experts are asked to review a lot of papers!

The primary challenge Associate Editors face is finding Reviewers for manuscripts. When times get desperate, it may feel like anyone with a pulse will do! But of course the reality is that Reviewers need some relevant expertise. They also need to be able to carve out time from busy schedules. These two requirements are remarkably efficient at eliminating every name on a list of candidate Reviewers.

This Reviewer drought slows down the publishing process, and frustrates and stresses all involved. It also risks affecting quality – busy experts have no time to contribute to reviews of papers in their area, so manuscripts end up being reviewed hastily or by people in adjacent fields. However, so much effort goes into writing a manuscript (even a bad one), and so much in science depends fundamentally on the peer review process, that finding the right Reviewers is an important academic – and even ethical – obligation for Editors. Continue reading “Next-Gen Peer Review: Solving Today’s Problems with Tomorrow’s Solutions”

What Makes a Good Peer Review: Peer Review Week

For many academics, especially Early Career Researchers, writing a review can seem like quite a daunting task. Direct training is often hard to come by and not all senior academics have the time to act as mentors. As this week is Peer Review Week, we wanted to provide some advice on what makes a good review and what makes a bad review. This advice has been kindly provided by the Methods in Ecology and Evolution Associate Editors – all of whom are authors and reviewers as well.

The BES Guide to Peer Review in Ecology and Evolution

Before we dive into the tips from our Editors though, we want to highlight one of the best resources for anyone looking for peer review guidance – the BES Guide to Peer Review in Ecology and Evolution. This booklet is intended as a guide for Early Career Researchers, who have little or no experience of reviewing journal articles but are interested in learning more about what is involved. It provides a succinct overview of the many aspects of reviewing, from hands-on practical advice about the actual review process to explaining less tangible aspects, such as reviewer ethics. You can get the PDF version of the guide (and the other BES guides) for free on the BES website. Continue reading “What Makes a Good Peer Review: Peer Review Week”

Peer review week: Encouraging collaborative peer review

Post from Managing Editor Emilie Aimé. Check out the methods.blog later in the week for some of the Methods in Ecology and Evolution Associate Editors’ perspectives on collaborative peer review. It’s Peer Review Week 2016 and the BES journals are celebrating with a series of blog posts on how much we value our reviewers. Here at the BES we love Early Career Researchers. We give out grants… Continue reading Peer review week: Encouraging collaborative peer review

Thank You to All of Our Reviewers: Peer Review Week 2016

As many of you will already know, this week is Peer Review Week (19-25 September). Peer Review Week is a global event celebrating the vital work that is done by reviewers in all disciplines. To mark the week, we will be having a series of blog posts about peer review. The theme for this year’s Peer Review Week is recognition for review and we’re starting … Continue reading Thank You to All of Our Reviewers: Peer Review Week 2016

Issue 7.9

Issue 7.9 is now online!

The September issue of Methods is now online!

This month’s issue contains two Applications articles and three Open Access articles, all of which are freely available.

– Arborist Throw-Line Launcher: A cost-effective and simple alternative for collecting leaves and seeds from tall trees. The authors have also provided some tutorial videos on YouTube.

– ctmm: An R package which implements all of the continuous-time stochastic processes currently in use in the ecological literature and couples them with powerful statistical methods for autocorrelated data adapted from geostatistics and signal processing.

Continue reading “Issue 7.9”

New Associate Editor: Marie Auger-Méthé

Today, we are pleased to be welcoming a new member of the Methods in Ecology and Evolution Associate Editor Board. Marie Auger-Méthé joins us from Dalhousie University in Canada and you can find out a little more about her below.

Marie Auger-Méthé

“I am broadly interested in developing and applying statistical tools to infer behavioural and population processes from empirical data. My work tends to focus on marine and polar mammals, but the methods I develop are often applicable to a wide range of species and ecosystems. My recent work has centred on modelling animal behaviour using movement data and I generally analyse data with spatial and/or temporal structure.”

Marie has been reviewing for Methods in Ecology and Evolution for a few years and has contributed articles to some of the other journals of the British Ecological Society too. Earlier this month, her article titled ‘Evaluating random search strategies in three mammals from distinct feeding guilds’ was published in the Journal of Animal Ecology. Continue reading “New Associate Editor: Marie Auger-Méthé”