Who to Trust? The IDEA Protocol for Structured Expert Elicitation

Post provided by Victoria Hemming and Mark Burgman

Expert judgement is used to predict current and future trends for Koala populations across Australia

New technologies provide ecologists with unprecedented means for informing predictions and decisions under uncertainty: drones and apps that capture data faster and more cheaply than ever before, and new methods for modelling, mapping and sharing data.

But what do you do when you don’t have data (or the data you have are incomplete or uninformative), yet decisions need to be made?

In ecology, decisions often need to be made with imperfect or incomplete data. In these circumstances, expert judgement is relied upon routinely. Some examples include threatened species listing decisions, weighing up the cost and benefit of management actions, and environmental impact assessments.

We use experts to answer questions such as: how will koala populations trend over the coming decades? What is the density of crown of thorns starfish on a reef? How likely is a proposed management action to succeed?

These are questions about facts, in the form of quantities and probabilities, for which we simply can’t collect the data.

While ecology is a complex beast, we’re not alone in this predicament. Across most fields expert judgement is used extensively, from predicting nuclear risks, climate change, and volcanic eruptions, to forecasting the results of impending geopolitical events.

Expert judgement can be remarkably good, but it’s not infallible. There are many examples of expert judgements that have been deplorably bad. These judgements may then be used to inform quite serious, often irreversible, decisions.

If we’re going to rely on expert judgement, it’s important that we make sure those judgements provide the best possible data to inform our decisions. This means not only making sure that they are good (i.e. accurate, well-calibrated and/or informative), but also that they provide the information decisions actually require (e.g. they include uncertainty), and that the methods used to derive them are transparent, repeatable and open to critical appraisal (i.e. scientific). These principles are expected of empirical data, yet they’re seldom considered when asking experts for their opinions.

Fortunately, there are ways to improve expert judgements.

Why Experts Make Mistakes

The introduction of the Cane toad to Australia shows the consequences of poor expert judgements.

To improve expert judgements, it’s useful to understand why experts might make mistakes (Hemming et al. 2018). Forming judgements under uncertainty is incredibly difficult. It requires not only knowledge of a field, but the ability to adapt and communicate that knowledge, usually as numbers or probabilities.

When making judgements under uncertainty, experts (like all of us) tend to use a suite of heuristics (think of these as mental shortcuts or gut feelings) to help them. They may anchor on the information provided (anchoring bias), source information that confirms their own beliefs (confirmation bias), or be influenced by the group consensus (groupthink). Also, experts may not be used to communicating their knowledge in numbers and probabilities. These factors can contribute to overconfident or biased judgements, all of which can lead to bad decisions.

In many instances when we need expert judgement we aim to avoid bias by searching for the experts we believe have the best credentials. Experts are often selected based on things like their years of experience, peer-recommendation, and self-rating. Dogmatically applying these criteria can lead to the exclusion of potentially knowledgeable candidates, and to reliance on a single, seemingly well-qualified, expert. But research shows there’s no correlation between these metrics and performance on relevant, uncertain questions. Reliance on a single expert is a terrible idea.

Deriving Good Judgements

Many biases of judgement can be made to virtually ‘disappear’ (or are at least reduced) if care is taken in how these judgements are elicited (Hemming et al. In review).

Expert judgement can help discover the density of crown of thorns starfish. ©Jon Hanson

While we may not be able to select the best expert a priori, judgements elicited independently from a diverse group of individuals and then aggregated are an excellent option. Such groups usually perform better than the median individual, and sometimes better than the best-performing individual (this is often termed ’the wisdom of the crowd’). Providing an opportunity for feedback and discussion can help to reveal linguistic misunderstandings or new evidence that hadn’t been considered by the group. Asking experts to think about possible reasons why their initial beliefs might be wrong encourages them to seek out information they may have overlooked, and can reduce overconfidence. Paying attention to the wording of questions can help experts better express their judgements as numbers and probabilities. Weighting experts based on their performance on related questions can further improve the final judgements.
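As a rough illustration of why aggregation helps, here is a minimal sketch in R comparing a pooled (unweighted mean) judgement to the individual judgements it is built from. All numbers are invented for illustration.

```r
# Independent best estimates from four hypothetical experts for one quantity
estimates <- c(expert_A = 120, expert_B = 85, expert_C = 150, expert_D = 95)
truth <- 110  # the realised value, known here only for illustration

group_mean <- mean(estimates)  # the simplest mathematical aggregation

individual_error <- abs(estimates - truth)  # each expert's absolute error
group_error      <- abs(group_mean - truth) # error of the pooled judgement

group_error               # 2.5
median(individual_error)  # 20: the 'median' individual does much worse
```

A plain average is just the simplest starting point; performance weighting and other aggregation schemes can improve on it further.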

In other words, the way we structure our elicitation process (i.e. how we structure questions and get answers) plays a huge role in how good the final judgements will be.

Structured Elicitation Protocols

Research demonstrating the improvements that can be made by applying sound elicitation methods has led to the development of ‘structured elicitation protocols’. These protocols treat each step of the elicitation as a formal process of data acquisition, drawing on research to improve the final expert judgements. Importantly, they also aim to provide the same level of transparency, repeatability and defensibility for the resulting judgements that is expected of empirical data.

The need to apply these protocols has been discussed (here, here, and here), and they are being widely adopted across many domains. Their application in conservation and ecology has been limited, though. Possible reasons include the practical and financial constraints that make elaborate protocols difficult to apply. For example, in many conservation problems funding is rarely available to undertake face-to-face elicitations or to hire a trained facilitator to run the elicitation.

The IDEA Protocol

To encourage a more rigorous use of experts in ecology and conservation, we recently outlined steps to prepare for and implement a structured elicitation protocol termed ‘The IDEA protocol’ (“Investigate”, “Discuss”, “Estimate” and “Aggregate”), previously outlined here and here. The IDEA protocol distils the most valuable steps from existing structured protocols and combines them into a single, practical protocol:

  1. Recruit a diverse group of experts to answer questions with probabilistic or quantitative responses.
  2. Experts first Investigate the questions and clarify their meanings, then privately provide their individual best estimates and associated credible intervals, most often using a 3-step or 4-step question format (illustrated in the sketch below the figure).
  3. Experts get feedback on their estimates in relation to other experts.
  4. With the help of a facilitator, the experts Discuss the results, resolve different interpretations of the questions, cross-examine reasoning and evidence, and then provide a second and final private estimate.
  5. The individual estimates are then combined using mathematical aggregation (Aggregate).
The IDEA protocol is an effective method for eliciting expert judgements (figure from Hemming et al. In review*).
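To make the question format concrete, here is a minimal sketch in R of a 4-step response and the linear rescaling often used to standardise each expert’s interval to a common credible level (90% here). The variable names and numbers are illustrative assumptions, not part of the published guidelines.

```r
# Hypothetical 4-step responses from one expert for a single question
lower <- 40    # step 1: realistically lowest plausible value
upper <- 90    # step 2: realistically highest plausible value
best  <- 60    # step 3: best estimate
conf  <- 0.70  # step 4: stated confidence that the truth lies in [lower, upper]

target <- 0.90 # common credible level to standardise all experts to

# Linear extrapolation of the interval around the best estimate;
# in practice the result is truncated to any hard bounds (e.g. counts >= 0)
std_lower <- best - (best - lower) * target / conf
std_upper <- best + (upper - best) * target / conf

c(std_lower, best, std_upper)  # the expert's standardised 90% interval
```

Standardising intervals like this puts every expert on the same footing before feedback, discussion and aggregation.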

These steps have been tested with experts answering questions in their fields. They have been shown to improve the accuracy of the best estimate and the calibration of interval judgements (i.e. how often the intervals provided by experts contain the realised truth) (Hemming et al. In review).
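To see what calibration means in practice, the sketch below (in R, with invented data) counts how often standardised 90% intervals captured the realised truth across a set of test questions. For well-calibrated experts the hit rate should be close to 0.9.

```r
# Each row: one standardised 90% interval for one test question, plus the
# value that was later realised. All numbers are invented for illustration.
intervals <- data.frame(
  lower = c(10, 35,  5, 60),
  upper = c(30, 80, 25, 95),
  truth = c(22, 90, 12, 70)
)

hits <- with(intervals, truth >= lower & truth <= upper)
mean(hits)  # observed hit rate: 0.75 here, i.e. somewhat overconfident
```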

There are many other advantages of this protocol. It can accommodate remote elicitation (i.e. elicitation over email), making it accessible on a modest budget. The inclusion of the 3-step and 4-step question formats helps to get quantitative judgements from experts who try to avoid quantification. Documenting the reasoning and discussion accompanying quantitative judgements provides justification and resolves residual linguistic ambiguity. It can also help to refine models and target the collection of data to reduce uncertainty.

From its humble beginnings to improve biosecurity decision making for the Australian Government, the protocol has now gained wide attention. It’s been used for research into intelligence gathering for the United States Intelligence Advanced Research Projects Activity (IARPA), assessments of ecosystems for the International Union for Conservation of Nature, hydrological modelling for Australian river management, and procurement and maintenance decisions made by the Australian Department of Defence. Recently, guidelines for its application were published alongside world-leading methods and current research in a book about Elicitation. The IDEA protocol is at the forefront of developments in this field.

Preparing for an Elicitation

The IDEA protocol is a straightforward process, but implementing a structured elicitation can be daunting. The guidelines we developed in ‘A practical guide to structured expert elicitation using the IDEA protocol’ outline many of the steps that need to be undertaken before an elicitation. This is followed by step-by-step advice on how to implement the protocol. To get you on your way to eliciting better judgements, we have put extensive elicitation materials and templates online. We’ve also published RMarkdown code to help you automate data collation and cleaning, and to generate summary and feedback reports.
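As a flavour of what that automation involves, here is a minimal R sketch that collates round-1 estimates from per-expert CSV files and builds a simple per-question feedback summary. The file layout and column names are invented for illustration; the published templates and RMarkdown code handle this far more thoroughly.

```r
# Collate round-1 estimates from one CSV file per expert into a single table.
# Assumed (hypothetical) layout: round1/<expert>.csv with columns
# question, lower, best, upper.
files <- list.files("round1", pattern = "\\.csv$", full.names = TRUE)

round1 <- do.call(rbind, lapply(files, function(f) {
  est <- read.csv(f)
  est$expert <- sub("\\.csv$", "", basename(f))  # tag rows with the expert id
  est
}))

# Per-question feedback summary: mean of the lower, best and upper estimates
summary_tab <- aggregate(cbind(lower, best, upper) ~ question,
                         data = round1, FUN = mean)
print(summary_tab)
```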

To find out more about the IDEA protocol, read our Methods in Ecology and Evolution article ‘A practical guide to structured expert elicitation using the IDEA protocol’.

Thanks to Dr Anca Hanea, Dr Marissa Mc Bride, Dr Bonnie Wintle, A/Prof. Fiona Fidler, Dr Terry Walshe and researchers from the Centre of Excellence for Biosecurity Risk Analysis and the Centre for Environmental and Economic Research who helped to develop, test and co-author the guidelines for the IDEA protocol, from which this blog has extensively borrowed.
