Hackathons have become a regular feature in the data-science world. Get a group of people with a shared interest together, give them data, food, and a limited amount of time and see what they can produce (often with prizes to be won). Translated into the world of academia as research hackathons, these events are a fantastic way to foster collaboration, interdisciplinary working and skills sharing.
The Quantitative Ecology hackathon was an intense day of coding, resulting in creative and innovative research ideas using social and ecological data. Teams worked through the day to develop their ideas with support from experts in R, open science and statistics. We ended up with five projects addressing questions from ‘Who has the least access to nature?’ to ‘Where should citizen scientists go to collect new data?’.
Artificial intelligence (or AI) is an enormously hot topic, regularly hitting the news with the latest milestone in which computers match or exceed human performance at a particular task. For ecologists, one of the most exciting and promising uses of artificial intelligence is the automatic identification of species. If this could be reliably cracked, the streams of real-time species distribution data that could be unlocked worldwide would be phenomenal.
Despite the hype and rapid improvements, we’re not quite there yet. Although AI naturalists have had some successes, they can also often make basic mistakes. But we shouldn’t be too harsh on the computers, since identifying the correct species just from a picture can be really hard. Ask an experienced naturalist and they’ll often need to know where and when the photo was taken. This information can be crucial for ruling out alternatives. There’s a reason why field guides include range maps!
Currently, most AI identification tools only use an image. So, we set out to see if a computer can be taught to think more like a human, and make use of this extra information.
We have now entered the era of artificial intelligence. In just a few years, the number of applications using AI has grown tremendously, from self-driving cars to recommendations from your favourite streaming provider. Almost every major research field is now using AI. Behind all this, there is one constant: the reliance, in one way or another, on deep learning. Thanks to its power and flexibility, this subset of AI is now everywhere, even in ecology, as we show in ‘Applications for deep learning in ecology’.
But what is deep learning exactly? What makes it so special?
Deep Learning: The Basics
Deep learning is a set of methods based on representation learning: a way for machines to automatically detect how to classify data from raw examples. This means they can detect features in data by themselves, without any prior knowledge of the system. While some models can learn without any supervision (i.e. they can learn to detect and classify objects without knowing anything about them), so far these models are outperformed by supervised models, which require labelled data to train. So, if we want a model to detect cars in pictures, it will need examples with cars in them to learn to recognise them.
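To make the idea of supervised learning concrete, here is a deliberately tiny sketch (our illustration, not from the paper): a single linear unit trained with the classic perceptron rule. It receives only labelled examples, no hand-crafted rules, and learns a decision boundary from them. Deep learning stacks many such units into layers, but the labelled-examples-in, learned-classifier-out principle is the same.

```python
# Labelled training data: (feature_1, feature_2) -> label.
# Points above the line y = x belong to class 1, below it to class 0.
examples = [((0.2, 0.9), 1), ((0.1, 0.7), 1), ((0.8, 0.3), 0),
            ((0.9, 0.1), 0), ((0.4, 0.8), 1), ((0.7, 0.2), 0)]

# A single linear unit: two weights and a bias.
w = [0.0, 0.0]
b = 0.0

def predict(x):
    """Classify a point using the current weights."""
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Perceptron training: nudge the weights whenever a labelled
# example is misclassified.
for epoch in range(20):
    for x, label in examples:
        error = label - predict(x)   # 0 if correct, +/-1 if wrong
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

# After training, the unit reproduces the labels it was shown.
print([predict(x) for x, _ in examples])
```

The data here are linearly separable, so the perceptron is guaranteed to converge; real image classifiers face far messier data, which is why deep networks with many layers (and far more labelled examples) are needed.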
A warning: Halloween is nigh, and the following post contains graphic real-life imagery of maggot-eaten eye sockets and deadly pianos. Read on… if you dare!
A Death in the Woods
In the vast and often frozen boreal forest of northern Canada there is a slow-burning forensic investigation into a death. The victim: a woodland caribou, an iconic species that is threatened or endangered throughout its range.
The scene is very much made-for-TV neo-Scandinavian noir. From a not-too-luxurious regional office in the town of Fort Smith, just north of the Alberta border, over a steaming cup of coffee, world-weary biologist Allicia Kelly – who’s seen it all and then some – is monitoring the movements of collared animals on her computer screen. It’s the middle of May. The females, nearly all pregnant, are scattering to higher ground to find suitably cozy and secluded sites to calve. All is as peaceful and idyllic as a bunch of blips on a computer screen can be.
But then (cue slightly unsettling dissonance in the soundtrack) one of the little blips seems to have stopped moving. Kelly raises her eyebrow, tells herself to keep an eye out. A moment later she makes the call: “Team, we’ve got another ringer … let’s roll!”
As environmental managers, we’re frequently asked to make judgements about the relative health of the environment. This is often difficult because, by its nature, the environment is highly variable in space and time. Ideally, such judgements should be informed by robust scientific investigation, or more precisely, the reliable interpretation of the resulting data.
Type I and Type II Errors
Even with robust investigations and good data, our interpretations can sometimes be wrong. In general, this happens when:
the investigation concludes that an impact has occurred when in fact it hasn’t (a Type I error), or
the investigation fails to detect an impact that has actually occurred (a Type II error).
Understanding the circumstances that lead to these errors is, unfortunately, complicated, and difficult unless you have a strong statistical background.
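One accessible way to build that intuition is simulation. The sketch below (our illustration, not from the post) repeatedly "samples" a control site and an impact site and applies a simple two-sample z-test: when the two sites truly have the same mean, any detection is a Type I error; when the impact site genuinely differs, any non-detection is a Type II error. The population means, effect size, and sample size are all assumed values chosen for the example.

```python
import random
import statistics

random.seed(42)

def z_test(sample_a, sample_b):
    """Return True if the test declares a difference (impact detected)."""
    n = len(sample_a)
    diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    se = (statistics.pstdev(sample_a) ** 2 / n +
          statistics.pstdev(sample_b) ** 2 / n) ** 0.5
    return abs(diff / se) > 1.96  # two-sided test at alpha = 0.05

trials, n = 2000, 30

# Type I error: no real impact (both samples drawn from the same
# population), yet the test still flags a difference.
type_1 = sum(
    z_test([random.gauss(10, 2) for _ in range(n)],
           [random.gauss(10, 2) for _ in range(n)])
    for _ in range(trials)) / trials

# Type II error: a real impact (the means differ), but the test
# fails to detect it.
type_2 = sum(
    not z_test([random.gauss(10, 2) for _ in range(n)],
               [random.gauss(11, 2) for _ in range(n)])
    for _ in range(trials)) / trials

print(f"Type I rate ~ {type_1:.2f} (near the chosen alpha of 0.05)")
print(f"Type II rate ~ {type_2:.2f} (depends on effect size and n)")
```

Re-running this with a larger sample size or a bigger true effect shrinks the Type II rate, which is exactly the trade-off environmental managers have to weigh when designing monitoring programmes.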
“Man must rise above Earth to the top of the atmosphere and beyond, for only then will he fully understand the world in which he lives” – Socrates (469-399 BC)
Since the launch of the first Landsat mission in 1972, several new Earth observation satellites have made their way into Earth’s orbit. As of 2018, UNOOSA recorded an impressive 1,980 active satellites. Of those, 661 were dedicated to Earth observation. These numbers show how widespread the use of remote sensing technologies has become.
As space agencies recognised the scientific and economic value of satellite data, they made it open access. By doing so, they gave the scientific community the means to develop a growing variety of spatially explicit – and often temporally dynamic – data products on both the land and the atmosphere. Over the years, those of us studying movement ecology have profited greatly from this.
The ocean was once a limitless frontier, primed for exploitation of fish and other marine life. Today, a scan of the coastline (in our case off Australia and the US) shows an ocean landscape dotted with aquaculture pens, wind farms, eco-tours, and oil rigs, as well as commercial and recreational fishing boats. This presents marine and maritime managers with the huge challenge of balancing competing social, conservation, and economic objectives. Trade-offs arise even from success stories. For example, seal and sea lion populations are recovering from centuries of hunting, which is great. But now they’re preying heavily on economically valuable species like salmon and cod, creating potential tensions between fisheries and conservation communities. Ecosystem-based management is one way that we can start to address these trade-offs.
Temperature is important in ecology. Rising global temperatures have pushed ecologists and conservationists to better understand how temperature influences species’ risk of extinction under climate change. There’s been an increasing drive to measure temperature at the scale that individual organisms actually experience it, made possible by advances in technology.
Enter: the thermal camera. Unlike the tiny dataloggers that revolutionised thermal ecology in the past decade or so, thermal images capture surface temperature, not atmospheric temperature. Surface temperature may be as (if not more) relevant for organisms that are very small or flat, or thermoregulate via direct contact with the surface. Invertebrates and herps are two great examples of these types of organisms – and together make up a huge proportion of terrestrial biodiversity. Also, while dataloggers can achieve impressive temporal extent and resolution, they can’t easily capture temperature variation in space.
Like dataloggers, thermal cameras are becoming increasingly affordable and practical. The FLIR One smartphone attachment, for example, weighs in at 34.5 g and costs around US$300. For that, you get 4,800 spatially explicit temperature measurements at the click of a button. But without guidelines and tools, the eager thermal photographer runs the risk of accumulating thousands of images with no idea of what to do with them. So we created the R package ThermStats. This package simplifies the processing of data from FLIR thermal images and facilitates analyses of other gridded temperature data too.
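To see why a thermal image is more than a single temperature reading, here is a minimal, language-agnostic sketch (in Python, not the ThermStats R API; the grid values are invented for illustration). Each pixel is a surface temperature, so one photo yields a grid of spatially explicit measurements from which summary statistics can be computed.

```python
# A toy 3 x 4 "thermal image": each cell is a surface temperature in °C.
grid = [
    [21.4, 22.1, 23.0, 24.2],
    [21.9, 22.8, 25.1, 26.3],
    [22.5, 24.0, 27.2, 28.9],
]

# Flatten the grid into one list of pixel temperatures.
pixels = [t for row in grid for t in row]
mean_t = sum(pixels) / len(pixels)

# Spatial variation that a single point datalogger would miss:
hotspot = max(pixels)   # warmest surface in the frame
coldspot = min(pixels)  # coolest surface in the frame
span = hotspot - coldspot

print(f"mean {mean_t:.1f} °C, range {coldspot}-{hotspot} °C (span {span:.1f} °C)")
```

A real FLIR image has thousands of pixels rather than twelve, and packages like ThermStats add the radiometric conversions and spatial statistics, but the underlying idea is the same: the image itself is the dataset.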
The number of studies published every year in ecology and evolutionary biology has increased rapidly over the past few decades. Each new study contributes more to what we know about a topic, adding nuance and complexity that helps improve our understanding of the natural world. To make sense of this wealth of evidence and get closer to a complete picture of the world, researchers are increasingly turning to systematic review methods as a way to synthesise this information.
What is a Systematic Review?
Systematic reviews, first developed in public health fields, take an experimental design approach to reviewing the literature. They treat the search for primary studies as a transparent and reproducible data gathering process. The rigorous methods used in systematic reviews make them a trusted form of evidence synthesis. Researchers use them to summarise the state of knowledge on a topic and make policy and practice recommendations.
Researchers at Washington State University and Smith-Root recently invented an environmental DNA (eDNA) filter housing that automatically preserves captured eDNA by desiccation. This eliminates the need for filter handling in the field and for liquid DNA preservatives. The new material is also biodegradable, helping to reduce the long-lasting plastic waste associated with eDNA sampling.
This video explains their innovation in the field of eDNA sampling technology: