COVID-19 epidemiological models have become ubiquitous. Since the beginning of the COVID-19 outbreak, decision-makers across the world have used them to understand the transmission mechanism of the disease, predict the course of the epidemic, introduce mitigation and suppression strategies, plan healthcare resources to meet emerging demand, lift restrictions placed to mitigate or suppress the virus and, more recently, set priorities for treatment and vaccine investments. “In the history of humanity, perhaps no data models have been more recognisable than COVID-19’s infection and death curves”, declared the authors of an article published by the World Economic Forum (WEF).
There are over 90 quantitative COVID-19 models listed in the database maintained by the Society for Medical Decision Making (SMDM). Although many were built with similar objectives, each model makes different assumptions about the properties of the novel coronavirus, contact patterns and human behaviour. The modellers also use different mathematical approaches and draw on different data sources as inputs. As a result, these models do not necessarily arrive at the same conclusions, leaving end-users in a quandary about how to interpret and use them in policymaking. These challenges are particularly pertinent when countries need to respond quickly to the epidemic and contain public panic. The rush to make speedy decisions led to a situation where many models were relied on without thorough assessment or were used to answer questions well beyond their originally intended purpose. Notwithstanding these limitations, most countries affected by the first wave of the virus used models for early policy response and, regardless of their experience, policymakers will need to continue using them to weigh trade-offs between the economy and epidemiology, as well as to assess not only treatments but also vaccines, diagnostics and other related health technologies that will become available. Given the nature of the disease, which has already affected close to 150 million people and taken three million lives, we cannot rely on experimental trials to evaluate policy roll-outs and will need to continue making evidence-informed decisions based on COVID-19 models.
The COVID-19 Multi-model Comparison Collaboration
To this end, a group of organisations representing different constituencies, namely the World Health Organization (WHO), the World Bank Group, the Bill and Melinda Gates Foundation (BMGF) and the International Decision Support Initiative (iDSI), came together as the COVID-19 Multi-model Comparison Collaboration (CMCC) to review and develop guidance on the use of COVID-19 models to inform policymaking. The CMCC initiative is unique because it convened academics, the modellers behind seven widely used COVID-19 models that have assisted policy responses in low- and middle-income countries (LMICs), the policymakers who are the target users, and the international partners and funders currently financing such activities in LMICs. The CMCC comprised: a Technical Group of independent academics to assess and advise on the fitness-for-purpose of models; a Policy Group of decision-makers from LMICs in Asia, Africa, the Middle East and Latin America to develop guidance on the use of COVID-19 models for policy response; a Modellers’ Group of modelling teams that agreed to engage with the Technical and Policy Groups to increase understanding of their models; and, lastly, the Partner and Management Groups, which supported the use of COVID-19 models in LMICs through financing or technical assistance programmes. Details of the various groups involved in the CMCC can be found at https://decidehealth.world/CMCC.
CMCC’s Technical and Policy Reports
This initiative has generated several outputs that we believe will be instrumental in shaping model-based decision-making in the COVID-19 era. The Technical Group produced tables comparing the similarities and differences of the seven models, identifying areas where harmonisation of key parameters is needed. These results may be used to inform regional, national and sub-national efforts to improve the availability of local data and support modelling activities for better predictions (see report). The analysis also summarises the key features of the models and what they can and cannot do to inform the policy response. It further notes that modelling cannot replace surveillance, epidemiological and intervention studies; in fact, models depend on such studies for good-quality data.
The Policy Group, in its report, proposes a Reporting Standards Trajectory that outlines the commitments of policymakers, funders and modellers for generating and reporting policy-relevant evidence, as well as for opening modelling materials to public scrutiny, over the course of a pandemic. Typically, funders work with modellers, who must deliver the outputs defined in contracts, but funders rarely play a role beyond finding good modelling teams to support. Policymakers, who may rely on models when formulating their policy response, need to work within a short timeframe yet lack the technical know-how to ensure model quality. Modellers, who work with both funders and policymakers, are under constant pressure to produce models on short timelines while operating within the constraints of limited data and resources. The Reporting Standards Trajectory seeks common ground among the three parties by delineating minimum, acceptable and ideal levels of information, and the timelines in which these outputs become available: a package that policymakers can expect to receive from modellers and that funders also recognise. Additionally, an important determinant of the usefulness of models is the ability to contextualise them to the local setting, and the Policy Group report calls for collaboration among stakeholders throughout the policy development and implementation process to ensure accountability in their use.
With the CMCC, we have sought to be pragmatic, and we believe this initiative breaks new ground. It sets a new standard for model comparisons conducted at the intersection of modelling science and policymaking. This is significant compared with previous model-comparison efforts, including those for other diseases such as HPV (Den Boon et al., 2019; Brisson et al., 2020), which mainly involved modellers and academics providing technical advice on model use. The CMCC convened not only modellers and academics but also development partners and policymakers, and this process has yielded several insights. Notably, the initiative has made us rethink what a successful model looks like: is it one that makes a precise prediction? Probably not; when predicting an epidemic, one should not have to wait to see the predictions unfold. A good model, as this experience has shown us, instils trust among users, who will then do everything in their power to prevent the dire consequences the model may have foretold. Indeed, the CMCC found that there is no single “best” COVID-19 model, and users need to rely on a range of models or other types of evidence depending on the policy question they are trying to answer. Further, better information for model inputs is becoming available daily, requiring some models or their results to be updated regularly to remain relevant. To engender trust in the evidence used, it is imperative that the decision-making process be collaborative and involve the various stakeholders.
The way forward
Infectious disease management and pandemic preparedness will remain a policy priority for governments around the world, and the use of models is only likely to grow in the coming years. As new therapeutics and vaccines become available, modelling techniques can be applied to assess the value of investing in them. In this context, researchers working on epidemiological models and Health Technology Assessment (HTA) can utilise the results of the CMCC to work with stakeholders and address the policy needs of countries during a pandemic. This experience has changed the minds of those of us who have worked tirelessly on this topic, and we believe the results of the CMCC will be vital in informing the path ahead.
We are grateful to the COVID-19 Multi-model Comparison Collaboration (CMCC) Secretariat teams (listed alphabetically): Prof. Marc Brisson (Laval University, Canada), Dr. Y-Ling Chi (CGD Europe, UK), Ms. Nejma Chiekh (World Bank Group), Dr. Hannah Clapham (National University of Singapore, Singapore), Ms. Silu Feng (World Bank Group), Dr. Mohamed Gad (Imperial College London, UK), Dr. Adrian Gheorghe (Imperial College London, UK), Mr. Joseph Kazibwe (Imperial College London, UK), Dr. Itamar Megiddo (Strathclyde University, UK), Mr. Francis Ruiz (Imperial College London, UK), Katelijn Vandemaele (WHO) and Dr. Suwit Wibulpolprasert (Ministry of Public Health, Thailand). The CMCC was established by a group of partners coming together: Bill and Melinda Gates Foundation (BMGF), Data 4 SDGs partnership, Department for International Development (DFID), UK, the International Decision Support Initiative (iDSI), Norwegian Agency for Development Cooperation (NORAD), Ministry of Higher Education, Science, Research and Innovation (MHESI), Royal Thai Government, the World Bank (WB), the World Health Organization (WHO), the US Centers for Disease Control and Prevention (US CDC), and United States Agency for International Development (USAID).
The findings, interpretations and conclusions expressed in this article do not necessarily reflect the views of the organisations to which the authors are affiliated or the partner organisations of the CMCC.
Each organisation in the CMCC used its own funding sources for this work. The authors have no conflicts of interest to declare.