
Research Methods & Reporting

Guidelines for reporting of health interventions using mobile phones: mobile health (mHealth) evidence reporting and assessment (mERA) checklist

BMJ 2016; 352 doi: https://doi.org/10.1136/bmj.i1174 (Published 17 March 2016) Cite this as: BMJ 2016;352:i1174
  1. Smisha Agarwal, associate faculty1 2 3,
  2. Amnesty E LeFevre, assistant scientist1 2,
  3. Jaime Lee, research assistant1 2,
  4. Kelly L’Engle, assistant professor4 5,
  5. Garrett Mehl, scientist6,
  6. Chaitali Sinha, senior programme officer7,
  7. Alain Labrique, associate professor1 2
  8. for the WHO mHealth Technical Evidence Review Group
  1. Johns Hopkins Bloomberg School of Public Health, Department of International Health, Baltimore, MD 21205, USA
  2. Johns Hopkins University, Global mHealth Initiative, Baltimore
  3. Gillings School of Global Public Health, University of North Carolina, Chapel Hill, NC, USA
  4. Family Health International 360, Durham, NC, USA
  5. School of Nursing and Health Professions, University of San Francisco, San Francisco, CA, USA
  6. World Health Organization, Department of Reproductive Health and Research, Geneva, Switzerland
  7. International Development Research Centre, Ottawa, Canada
  Correspondence to: A Labrique alabriq1@jhu.edu
  • Accepted 9 February 2016

To improve the completeness of reporting of mobile health (mHealth) interventions, the WHO mHealth Technical Evidence Review Group developed the mHealth evidence reporting and assessment (mERA) checklist. The development process for mERA consisted of convening an expert group to recommend an appropriate approach, convening a global expert review panel for checklist development, and pilot testing the checklist. The guiding principle for the development of these criteria was to identify a minimum set of information needed to define what the mHealth intervention is (content), where it is being implemented (context), and how it was implemented (technical features), to support replication of the intervention. This paper presents the resulting 16 item checklist and a detailed explanation and elaboration for each item, with illustrative reporting examples. Through widespread adoption, we expect that the use of these guidelines will standardise the quality of mHealth evidence reporting, and indirectly improve the quality of mHealth evidence.

Summary points

  • To improve the reporting of mobile health (mHealth) interventions, the WHO mHealth Technical Evidence Review Group developed a checklist on mHealth evidence reporting and assessment (mERA)

  • The checklist aims to identify a minimum set of information needed to define what the mHealth intervention is (content), where it is being implemented (context), and how it was implemented (technical features), to support replication of the intervention

  • Through widespread adoption, these guidelines should standardise the quality of mHealth evidence reporting, and indirectly improve the quality of mHealth evidence

Mobile technologies have the potential to bridge systemic gaps in access to and use of health services, particularly among underserved populations. mHealth—defined as the use of mobile and wireless technologies for health—aims to capitalise on the rapid uptake of information and communication technologies (ICT) to improve health system efficiency and health outcomes. Over the past decade, global enthusiasm and the interest of development agencies, researchers, and policy makers have led to the rapid proliferation of mHealth solutions throughout developed and developing countries. The World Bank reported that there were more than 500 mHealth projects in 2011 alone.1 Despite the emergence of hundreds of mHealth studies and initiatives, there remains a lack of rigorous, high quality evidence on the efficacy and effectiveness of such interventions.2 3 Current mHealth evidence is disseminated in multiple forms including peer reviewed literature, white papers, reports, presentations, and blogs. The evidence base is heterogeneous in the quality, completeness, and objectivity of its reporting of mHealth interventions, making comparisons across intervention strategies difficult. This has led to a call for a set of standards that can harmonise and improve the quality of future research publications, to facilitate screening of emerging evidence and identification of critical evidence gaps. Such improvements in the reporting of evidence can support policy makers in making decisions about mHealth intervention selection.4

The value of standardised guidelines is well accepted and several tools exist to assess the quality and to standardise the reporting of scientific evidence. For example, the grading of recommendations assessment, development, and evaluation (GRADE) approach rates the quality of evidence and the strength of recommendations, and is routinely used by international organisations such as the World Health Organization and the Cochrane Collaboration.5 In other fields, the consolidated health economic evaluation reporting standards (CHEERS) statement provides reporting guidance for economic evaluations.6 Other tools have also been developed to standardise the reporting of systematic reviews and meta-analyses (eg, preferred reporting items for systematic reviews and meta-analyses (PRISMA)),7 and to assess their methodological quality or reliability (eg, assessing methodological quality of systematic reviews (AMSTAR)).8 The consolidated standards of reporting trials (CONSORT) statement provides a 22 item checklist for the reporting of randomised controlled trials.9 Other evidence reporting and synthesis approaches exist for meta-analyses of observational studies (eg, meta-analysis of observational studies in epidemiology (MOOSE)),10 non-randomised designs (eg, transparent reporting of evaluations with non-randomised designs (TREND)),11 and observational studies (eg, strengthening the reporting of observational studies in epidemiology (STROBE)).12 CONSORT-EHEALTH aims to provide guidance on the reporting of trials of web based interventions (eHealth) and mHealth.13

Although CONSORT-EHEALTH is aimed at web based intervention trials, many mHealth interventions, especially in low and middle income countries, do not have an active web based component. Additionally, web based interventions do not necessarily have a mobile component (that is, use of a mobile phone or tablet). Lastly, given that the field of mHealth is still in its early stages, evaluations of mHealth interventions typically use descriptive and observational study designs in addition to randomised trials. These existing tools (including CONSORT-EHEALTH) are study design specific and focus on methodological rigour. They do not provide recommendations for the reporting of technical details, feasibility, and sustainability of the intervention strategies, which further adds to the challenge of comparing digital strategies. The template for intervention description and replication (TIDieR) checklist partly fills this gap by providing a guide to the reporting of interventions in general.14 However, no reporting guidelines capture the priority descriptors needed to adequately understand and potentially replicate ICT interventions for health.

The quality of reporting of evidence on mHealth interventions has varied. This is likely attributable to two factors: the multidisciplinary nature of mHealth, which combines approaches from the fields of healthcare and technology, and the rapid pace of technology development, which often outpaces our ability to generate and disseminate quality evidence. In the technology space, prototypes are usually assessed through proof of concept or demonstration studies with fast turnaround times for modification. These results are generally disseminated rapidly in the grey literature, through white papers, conference papers, presentations, and blogs. By contrast, research and dissemination in global public health move at a slower pace, beginning with formative research, followed by measurement of efficacy and then effectiveness. Each of these evaluation steps might require considerable resources and take a long time to implement and, ultimately, publish. The concise nature of peer reviewed literature also limits the reporting of the technical details that describe what the mHealth intervention is, constraining efforts to synthesise research on a particular intervention or technical strategy effectively.

To address this gap, WHO convened a group of global experts working at the intersection of mHealth research and programme implementation, called the mHealth Technical Evidence Review Group (mTERG). mTERG identified the need for a tool that provides guidelines for the reporting of evidence on the effectiveness of mHealth interventions. The group recognised that the evaluation and reporting of mHealth and ICT interventions requires a unique lens, blending a combination of study designs and methods, as well as reporting that incorporates the description of the intervention and the context in which the intervention is implemented. The proposed mHealth Evidence Reporting and Assessment (mERA) checklist resulted from a series of consultations with the mTERG. This paper describes the scope, development process, and components of mERA.

mERA checklist development

The development of mERA followed strategies for the development of reporting guidelines, as outlined by Moher and colleagues.15 The development process comprised three main steps (fig 1): convening an expert working group (WHO commissioned the Johns Hopkins Global mHealth Initiative (JHU-GmI) to develop an approach for the mERA guideline), convening a global expert review panel for checklist development, and pilot testing the checklist.

Fig 1 Development process for the mERA checklist

Developing an approach

JHU-GmI is a multidisciplinary consortium of technical and research experts with global experience in developing and researching mHealth interventions. In October 2012, WHO convened a working group of JHU-GmI experts to review existing reporting guidelines, determine their applicability to mHealth evidence, and articulate the relevance of additional guidelines, if appropriate. Based on a detailed review of existing guidelines, the working group recommended that the reporting items in the existing guidelines needed augmenting for relevance and application to the mHealth literature. The JHU-GmI working group recommended that guidelines for mHealth evidence should comprise two key components: a checklist to enable adequate classification and replication of the mHealth intervention being reported; and a checklist to assess the methodological rigour of the study design used to evaluate the intervention, appropriate to the stage of the innovation.

An initial draft of the checklist for reporting on the technical aspects of the mHealth interventions was developed on the basis of a systematic review of mHealth literature. Once drafted, these compiled criteria were vetted through interviews with mHealth research and implementation experts. The guiding principle for the development of these criteria was to identify a minimum set of information critical to defining what the mHealth intervention is (content), where it is being implemented (context), and how it was implemented (technical features), to ensure that a reader would be able to replicate the intervention. Web appendix 1 briefly describes the development of the methodological checklist.

Expert review

In December 2012, WHO convened a three day meeting for mTERG with 18 global mHealth experts in Montreux, Switzerland. Experts consisted of academic researchers, implementation specialists, technologists, government decision makers, and representatives of several WHO departments and research programmes. At this meeting, the background, rationale for the development of the mERA criteria, and a draft of the criteria were presented. The approach was subjected to intensive analysis, comment, and recommendations for improvement. After incorporation of this feedback, a WHO mTERG quality of information (QoI) taskforce was established to finalise the tool for pilot testing. The QoI taskforce comprised five members with technical expertise spanning varied health domains and research perspectives. The taskforce applied the mERA checklist to a sampling of literature, including peer reviewed and grey literature. Assessment scores and feedback from the taskforce were compiled and discussed over several video conference meetings in which members discussed the value of individual items and the definitions distinguishing them. Through these discussions, checklist items were refined and a final list of criteria was agreed on. At the end of this three month review process, taskforce members finalised the list of criteria with explanations and definitions for each criterion, providing sufficient detail to facilitate understanding and application of the tool in a consistent manner.

Pilot testing criteria

After the expert panel review, the mERA checklist was applied to 10 English language reports to test the applicability of each criterion to a range of existing mHealth literature and to assess whether the criteria were understood consistently by a diverse group of users. The documents assessed comprised a mix of peer reviewed and grey literature, and included qualitative studies, formative assessments, observational studies, and randomised controlled trials. Six graduate students with training in epidemiology and experience of working in mHealth participated in this exercise. Each reviewer was asked to read the documents and apply the criteria to evaluate their reporting quality. The percentage of overall agreement between reviewers and the κ statistic were calculated for each criterion. Criteria with less than 50% inter-rater agreement for three or more papers, or with less than 50% agreement for one paper and less than 75% agreement for two or more papers, were flagged for revision. The wording of these criteria was discussed and revised on the basis of feedback from the reviewers and in collaboration with the QoI taskforce. A detailed codebook guideline document was developed for the final list of mERA criteria, with relevant examples from the literature.
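
For readers unfamiliar with these agreement statistics, the following minimal sketch (the ratings shown are hypothetical, not the study data) illustrates how overall observed agreement and a Fleiss-style κ could be computed for one criterion rated as met or not met by six reviewers across ten documents.

```python
import numpy as np

def agreement_and_kappa(ratings):
    """ratings[i, j] = number of reviewers assigning document i to category j
    (eg criterion met / not met). Returns (observed agreement, Fleiss' kappa)."""
    n = ratings.sum(axis=1)[0]                     # reviewers per document (assumed constant)
    N = ratings.shape[0]                           # number of documents
    P_i = (np.square(ratings).sum(axis=1) - n) / (n * (n - 1))  # per-document agreement
    P_bar = P_i.mean()                             # overall observed agreement
    p_j = ratings.sum(axis=0) / (N * n)            # category proportions
    P_e = np.square(p_j).sum()                     # agreement expected by chance
    return P_bar, (P_bar - P_e) / (1 - P_e)

# Hypothetical ratings: each row is [reviewers scoring "met", reviewers scoring "not met"]
table = np.array([[6, 0], [5, 1], [6, 0], [2, 4], [6, 0],
                  [4, 2], [6, 0], [3, 3], [6, 0], [5, 1]])
observed, kappa = agreement_and_kappa(table)
print(f"Observed agreement: {observed:.2f}, kappa: {kappa:.2f}")
```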

To continue the testing and refinement of these guidelines, in 2014, the WHO Department of Reproductive Health and Research and mTERG supported the application of the mERA tool by independent research groups to three topic areas. These included the application of mERA to conduct evidence reviews on the use of mobile strategies for:

  • Management of stockouts of essential maternal and child health drugs

  • Promotion of adherence to treatment regimens by healthcare providers

  • Promotion of adolescent sexual and reproductive health.

These topics were selected in part to represent mHealth interventions at all levels of health service delivery, including at the health system level, at the provider level, and for behaviour change communication at the client level. The objective of this exercise was to conduct a systematic review of the evidence in these topic areas (drawing from published and non-peer reviewed sources), assess the quality of evidence reporting by applying the mERA guidelines, and further test and refine the mERA guidelines. The application of mERA to each area resulted in some refinements and adaptations of the criteria. Web appendix 2 presents the results from the first two applications. The third application, to adolescent sexual and reproductive health, will be submitted for peer review as a separate manuscript. Lastly, two additional criteria were added to the core items to ensure compliance with the TIDieR checklist and on the recommendation of journal reviewers.

Scope of the mERA checklist and guide for reporting mobile based health interventions

mERA was developed as a checklist of items which could be applied by authors developing manuscripts that aim to report on the effectiveness of mHealth interventions and by peer reviewers and journal editors reviewing such evidence. mERA aims to provide guidance for complete and transparent reporting on studies evaluating and reporting on the feasibility and effectiveness of mHealth interventions. The checklist does not aim to support the design or implementation of such studies, or to evaluate the quality of the research methods used. Rather, it is intended to improve transparency in reporting, promote a critical assessment of mHealth research evidence, and help improve the rigour of future reporting of research findings.

mERA was developed to reflect the stages of development of mHealth interventions and accompanying research. mHealth interventions typically start at the stage of gathering functional requirements and developing and testing the technology. The accompanying evaluation studies aim to assess the feasibility of the intervention and are often descriptive or observational. After this pilot stage, more robust study designs are used to assess the effect of the intervention. To highlight the importance of reporting results on the assessment of both the technical platform and core intervention, mERA includes technical specification criteria deemed necessary for complete reporting of a mHealth intervention. The maturity of the mHealth intervention, from prototyping (defined by feasibility and acceptability outcomes) to scaled deployment (where effect and implementation fidelity evaluations are paramount), is accommodated in the mERA checklist.

mERA components and use in conjunction with other guidelines

mERA is a checklist consisting of 16 items focused on the reporting of mHealth interventions (table 1). In addition to these criteria, web appendix 1 presents 29 items for reporting on study design and methods. As far as possible, the 16 core mERA items should be used in conjunction with the appropriate checklist for the study design, such as CONSORT for randomised trials and STROBE for observational studies. The general methodology criteria presented in web appendix 1 were developed from the extant checklists to guide the methodological reporting of mHealth evidence specifically, which has so far relied largely on exploratory study designs. We present this checklist in web appendix 1 as guidance for authors who might be unfamiliar with the extant design specific checklists, to point out important aspects of the research design and implementation that should be reported, at a minimum, to allow research to undergo synthesis and meta-analysis. We do, however, reiterate the importance of following published and accepted global guidelines for the reporting of research, by research design or method.

Table 1

mHealth evidence reporting and assessment (mERA) guidelines, including mHealth essential criteria

Explanation and elaboration

Table 1 presents the mERA core items. The rationale for inclusion and an explanation of each item are given below, with examples of good reporting.

Item 1—Infrastructure: describe, in detail, the necessary infrastructure which was required to enable the operation of the mHealth programme

Example

“The rapid increase of teledensity, from under 3% in 2002 to 33.5% in 2010, combined with a total adult literacy rate of 75% (2008), allowed this mHealth intervention to reach a large population.”16

Explanation

Have the authors clearly described the necessary infrastructure required to support technology operations in the study location? This refers to physical infrastructure, including electricity, access to power, and connectivity in the local context. Reported rates should ideally correspond to the context in which programme implementation occurred. Where only national level data are available, these limitations should be noted and the anticipated contextual variations discussed. Reporting of the minimum infrastructure requirements facilitates an improved understanding of the feasibility, generalisability, and replicability of the innovation in other contexts within and across countries. When this information is unreported, it is difficult to ascertain whether an mHealth strategy or specific technology might be transplantable to a different population, where infrastructure might be inferior to that of the location where the reported programme was conducted. Recognising that these are dynamic conditions, authors should strive to describe the minimum enabling infrastructure required for programme implementation.

Item 2—Technology platform: describe, in sufficient detail to allow replication of the work, the software and hardware combinations used in the programme implementation

Example

“RapidSMS® is an open source SMS application platform written in Python and Django. The SMS-based project was developed to track the pregnancy lifecycle . . . alerting health facilities, hospital and ambulances.”17

Explanation

Have the authors explained the choices of software and hardware used in the deployment of the described mHealth intervention? Clear communication of the technology used in the programme is critical to allow the authors’ work to be placed in context among other innovations. Without this information, it is difficult to group projects that have taken identical (or similar) approaches to resolving health system constraints. If the software used is a publicly available system (eg, Open Data Kit, CommCare), it should be explicitly named, together with any modifications or configuration. Links to the code should be provided, if publicly available. If the application or system has been custom coded for the programme and is open source, a link to the public repository where the code is housed would be useful to researchers attempting to replicate the authors’ work. Similarly, the hardware choices made should be described in detail akin to that in item 1. This allows future programme implementers to understand the minimum technical functionality required for replicate deployments to perform similarly to the programme being reported. For example, modifications such as whether the devices were functionally locked down to limit use of non-study applications should be reported.

Item 3—Interoperability: describe how, if at all, the mHealth strategy connects to and interacts with national or regional Health Information Systems (HIS)/programme context

Example

“Text messages were sent using a customized text-messaging platform integrated with the institution's immunization information system.”18

Explanation

Clarity about the fit within the existing HIS, either national or that of the host organisation, is important to understanding how the mHealth strategy adds to existing workflows, improves on existing processes, or complements existing programmes. Many mHealth projects have been criticised for existing in silos, independent of existing efforts to create organisational or national HIS architectures or to integrate with existing health promotion programmes.19 Simple descriptions of the specific data standards used (eg, HL7, the OpenMRS CIEL (Columbia International eHealth Laboratory) concept dictionary, ICD-9/10 (international classification of diseases, 9th and 10th revisions)) can provide some basis for gauging a programme’s interoperability readiness. These descriptions can also help to establish whether the activity is a limited scale pilot project or a strategy being built for national scale-up. The degree to which a programme might already be integrated into a national or organisational system may also be reported, explaining how data elements contribute to aggregate reporting through systems such as District Health Information Systems (DHIS).

Item 4—Intervention delivery: elaborate the mode, frequency, and intensity of the mHealth intervention

Example

“Parents of children and adolescents in the intervention group received a series of 5 weekly, automated text message influenza vaccine reminders.”20

Explanation

Often, in reporting the mHealth innovation, authors omit important details around the specific exposure that participants undergo. Firstly, the channels used to provide information or engage with the client should be described (eg, SMS, voice message, USSD (unstructured supplementary service data)) because this choice may explain operational variability across similar deployments. Parameters such as the intensity and frequency of interactions, duration of engagement, and time of day (if relevant) should be described. For example, with a text message intervention to stimulate behaviour change, how was the message curriculum structured, timed, and delivered? Was attention paid to the time of day? Were there limits placed on the number of messages sent in a given week, with concerns about information saturation? Were choices between modes of delivery offered to clients (eg, interactive voice response instead of text messages)? For what total duration were the messages sent?
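
As a purely hypothetical sketch (the parameter names and values below are illustrative, not drawn from the study quoted above), the delivery parameters that this item asks authors to report could be summarised in a simple structured record:

```python
from dataclasses import dataclass, field

@dataclass
class DeliverySpec:
    """Hypothetical summary of the delivery parameters item 4 asks authors to report."""
    channel: str                      # eg "SMS", "IVR", "USSD"
    direction: str                    # "push", "pull", or "two way"
    contacts_per_week: int            # frequency of interactions
    time_window: str                  # time of day, if relevant
    duration_weeks: int               # total duration of engagement
    weekly_message_cap: int           # any limit applied to avoid information saturation
    alternative_modes: list = field(default_factory=list)  # modes offered as alternatives

example = DeliverySpec(
    channel="SMS",
    direction="push",
    contacts_per_week=5,
    time_window="09:00-17:00 local time",
    duration_weeks=26,
    weekly_message_cap=10,
    alternative_modes=["interactive voice response"],
)
print(example)
```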

Item 5—Intervention content: describe how the content was developed/identified and customised

Example

“Best practices for health communication programs were used to systematically develop the family planning text messages which are largely based on the WHO Family Planning Handbook. The m4RH system is provided in the language Swahili and offers information about side effects, method effectiveness, duration of use and ability to return to fertility.”21

Explanation

We recommend that the source of any informational content (eg, behaviour recommendations, decision support guidelines, drug or referral recommendations, global or national technical guidelines) be mentioned clearly, together with any specific adaptation that may have been done to localise the content for the particular project. If new content was created, the process of enlisting qualified experts and the development, validation, and testing of novel content should be described. If information content is drawn from a publicly available resource, or newly developed content is being made publicly available, external links to this database should be provided.

Item 6—Usability testing: describe how the end-users of the system engaged in the development of the intervention

Example

“Designing the system began with formative research with overweight men and women to solicit feedback about dietary behaviours, current mobile phone and text and picture message habits, the type and frequency of text and picture messages helpful for weight loss, and nutrition-related topic areas that should be included in a weight loss program.”22

Explanation

Given the limitations in space in most peer reviewed journals, this important element of a carefully developed mHealth innovation is given short shrift. Often, separate manuscripts or documents can exist describing the formative research undertaken to capture user needs, define system constraints, map user workflows, and adapt communication content and the technical solutions to meet the local context. If this is the case, clear reference to where such detail can be found is useful to many readers attempting to either contextualise or replicate the work. The definition and recruitment of end-users should be clearly explained, together with a brief overview of the depth and breadth of formative work undertaken to engage end-users in the development of the system. Conversely, if end-users were not involved, this, too, should be explicitly mentioned.

Item 7—User feedback: describe user feedback about the intervention or user satisfaction with the intervention

Example

“Most telephone respondents reported that the platform was easy to use and simple, and appreciated the ability to obtain health information via mobile phone.”23

Explanation

Has user response to the mHealth programme been assessed, and acceptance verified? This information is key for documenting the likelihood of adoption of the intervention among end-users. Despite the importance of end-user feedback in informing mHealth programme design and influencing success, mHealth interventions are sometimes developed without sufficient audience or end-user feedback. User feedback could include user opinions about the content or user interface; or perceptions about usability, access, connectivity, or other elements of the mHealth programme. User feedback should inform the reader’s understanding of how and why the mHealth programme is expected to succeed, as well as challenges that may be encountered in programme implementation and replication.

Item 8—Access of individual participants: mention barriers or facilitators to the adoption of the intervention among study participants

Example

“It is possible that this intervention is less effective among certain subpopulations that may be considered harder to reach (i.e., males, those with a lower level of education and those who do not regularly attend health services)”24

Explanation

Have the authors considered who the mHealth programme will work for and who will struggle to access it? Some population subgroups might be more or less likely than others to adopt the mHealth tool. As with all modes of delivering health interventions, limitations of access among certain subgroups are likely and should therefore be candidly considered in the peer reviewed report. Challenges to access could relate to socioeconomic status, geographical location, education and literacy, gender norms that limit access to resources and information, and other demographic and sociocultural factors. Discussion of potential limitations in access will help the reader to make an informed assessment of whether the mHealth programme is appropriate for other target groups.

Item 9—Cost assessment: present basic costs of the mHealth intervention

Example

“Health workers in Salima and Nkhotakota with no access to the SMS program tend to spend an average of 1,445 minutes (24 hours) to report and receive feedback on issues raised to their supervisor at an average cost of USD $2.70 (K405.16) per contact, and an average contact frequency of 4 times per month.”25

Explanation

Economic evaluations provide critical evidence on the value for money of a particular mHealth solution and entail the comparison of costs and consequences for two or more alternatives. Examples of these include cost effectiveness, cost utility, cost consequence, cost benefit, or cost minimisation analyses. If an economic evaluation has been conducted, it should be reported according to the 24 item CHEERS statement.6 For evaluations of a single programme that do not have a comparator and for which economic evaluations are not possible, we propose reporting basic information on financial costs required to design or develop, start up, and sustain implementation, from the perspective of different users of the system over a clearly specified time period. Ideally, these perspectives would include programme, health systems, mobile network operator, and end-user costs. Methods for estimating resources and costs should be clearly defined, along with currency, price date, and conversion.6
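
Where a full economic evaluation is not possible, a minimal cost summary of the kind proposed here might look like the following sketch (all figures are hypothetical and shown from a programme perspective only):

```python
# Hypothetical programme-perspective costs, expressed in USD for a stated year.
start_up_cost = 25_000           # software configuration, content development, initial training
useful_life_years = 3            # period over which start-up costs are spread
recurring_cost_per_year = 9_000  # hosting, SMS fees, supervision
active_users = 600               # health workers enrolled during the year

annualised_start_up = start_up_cost / useful_life_years
total_annual_cost = annualised_start_up + recurring_cost_per_year
cost_per_user_per_year = total_annual_cost / active_users

print(f"Total annual financial cost: ${total_annual_cost:,.0f}")
print(f"Cost per active user per year: ${cost_per_user_per_year:,.2f}")
```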

Item 10—Adoption inputs/programme entry: describe how people are informed about the programme or steps taken to support adoption

Example

“Training on how to use the cell phones and on text-messaging protocol took place in 2 2-hour sessions on consecutive days. The first day involved training on how to use the cell phone—using pictographic instructions and interactive exercises—which was conducted in small groups (3-6 participants) and facilitated by a bilingual (English and Twi) proctor.”26

Explanation

Appropriate training, instructional materials, and competency assessment may be warranted because mHealth interventions typically require the health provider or client end-users to understand the scenarios of use and to be competent to use the intervention appropriately. Have the authors provided a description of the instructional approaches deployed for end-users of the mHealth intervention, or a justification for their exclusion? Authors should ensure that the details of these inputs are described. For health workers, these factors include the validity of the instructional approach used, the competency of instructors, validation of instructional materials, the number of participants per session, the number and length of instruction sessions, and the use of user guides and competency assessment tools. For clients, these factors include details on how clients are informed about the programme and any promotional approaches used, instructional user guide materials or training, the length and periodicity of training, and any competency assessment tools used. If instructional materials are publicly available, details of how to access them should be provided.

Item 11—Limitations for delivery at scale: present expected challenges for scaling up the intervention

Example

“Despite our findings that the intervention was not burdensome and was indeed well accepted by health workers, sending 2 messages daily for 5 days a week over 26 weeks to each health worker leaves limited space for other similar, non-malaria quality improvement interventions.”27

Explanation

In view of the challenges in translating findings from pilot studies to large scale implementations, authors should describe any factors that could limit delivery at scale. Pilot studies can often maintain the fidelity of implementation and closely monitor activities at a level that might not be sustained at scale. Have the authors discussed the level of effort involved in the implementation by different parties and considered the constraints on further scaling of the intervention? This information is critical for understanding the generalisability of the implementation and making inferences about its viability beyond a closely controlled and defined setting.

Item 12—Contextual adaptability: describe appropriateness of the intervention to the context, and any possible adaptations

Example

“Our mobile phone based survey apparatus may be particularly suited for conducting survey research in rural areas. In surveys where multiple research sites may be remote and dispersed, and where vehicles have to be used to travel from site to site to download data onto laptops, the mobile phone based data collection system may be a significantly cheaper option.”28

Explanation

The mHealth intervention might have functionality that applies broadly to a range of settings and usage scenarios, as well as specific functionality that is only suited to particular needs, users, and geographical localities. Have the authors provided details of the relevance of the functionality of the mHealth intervention to the specific research context, and drawn inferences about its potential relevance and adaptability across health domains, user types, geographical contexts, and health needs? Have the authors described the steps necessary to adapt the mHealth intervention to other use cases? If a piece of software is hard coded, adaptability could be limited, costly, or time consuming. Specifying limitations to the contextual adaptability of the system being reported helps to clarify whether the system being tested can be considered a potential platform for multiple future purposes, or whether it was designed specifically as a single use proof of concept.

Item 13—Replicability: present adequate technical and content detail to support replicability

Example

“The mobile phone application, CommCare, developed by Dimagi, Inc., was iteratively modified into Mobilize (Figure 1 - Screen shot images of Mobilize on the mobile phones).”29

Explanation

The potential for an mHealth intervention to be efficiently introduced to a new population is enhanced by the development and availability of standard operating procedures from successful interventions. Have the authors provided details of replicable processes that are deployed in a consistent manner? These may include the software source code, screenshots of workflows or dashboards, flowcharts of algorithms, or examples of content developed for end-users. If this level of detail cannot be included in the manuscript owing to space restrictions, links to external resources should be provided.

Item 14—Data security: describe security and confidentiality protocols

Example

“All survey data were encrypted, thus maintaining the confidentiality of responses. Communication between the browser and the server was encrypted using 128-bit SSL. System servers were secured by firewalls to prevent unauthorized access and denial of service attacks, while data was protected from virus threats using NOD32 anti-virus technology.”30

Explanation

A brief explanation of the hardware, software, and procedural steps taken to minimise the risk of data loss or unauthorised access to data should be reported. Many ethical review bodies now require investigators to report the steps taken to secure personally identifiable information, from identity fields to laboratory test results. Even in settings where laws, standards, or practices governing data security might be absent, researchers and programme implementers are responsible for taking reasonable measures to protect the privacy and confidentiality of participant identity and health information. Data security reporting should cover measures taken at the collection or capture of information and during its transmission, through to control measures at receipt, storage, and access. Data sharing protocols, if any, should also be mentioned in this section.
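
As one hypothetical illustration of a measure that could be reported under this item (a sketch only, not the approach used in the study quoted above), a personally identifiable field can be encrypted at the point of capture, before transmission and storage, here using symmetric encryption from the Python cryptography package:

```python
from cryptography.fernet import Fernet

# In practice the key would be issued and held by a key management service,
# never stored alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = {"participant_id": "HH-0042", "test_result": "reactive"}

# Encrypt the identifier at capture, before the record leaves the device.
record["participant_id"] = cipher.encrypt(record["participant_id"].encode()).decode()

# ... transmit and store the record over an encrypted channel (eg TLS) ...

# Authorised staff holding the key can recover the identifier.
original_id = cipher.decrypt(record["participant_id"].encode()).decode()
print(original_id)  # HH-0042
```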

Item 15—Compliance with national guidelines or regulatory statutes

Example

“The research assistant programmed the message into the automated, web-based, and HIPAA compliant Intelecare platform.”31

Explanation

If the mHealth intervention or application is being used to deliver health information, provide decision support guidance, or provide diagnostic support to health workers, the authors should describe whether national guidelines or other authoritative sources of information have been used to populate system content. For example, if the system is providing SMS based advice to pregnant women, does the information follow evidence-informed practices and align with recommendations of existing national or regulatory bodies? In some jurisdictions, the provision of healthcare advice or treatment guidelines falls under specific oversight of a national agency such as the United States Federal Communications Commission or Food and Drug Administration. This is especially true when the technology can be considered a medical device. If this determination has been made, and if specific regulatory oversight has been sought, this should be reported.

Item 16—Fidelity of the intervention

Example

“On average, users transferred data manually (pressed the button) 0.9 times a day, where the most eager user transferred data 3.6 times a day and the least eager none. Six of the 12 users experienced malfunctions with the step counter during the test period—usually a lack of battery capacity or an internal “hang-up” in the device that needed a hard restart.”32

Explanation

To what extent has the mHealth programme’s adherence to the intended, original deployment plan been assessed? If systems have been put in place to monitor system stability, ensure delivery (and possibly receipt) of messages, or measure levels of participant or end-user engagement with the system, these can generate metrics of intervention fidelity. Gaps in fidelity assessment and reporting make it difficult to link intervention delivery to possible process or health outcomes. Fidelity metrics could be based on either system generated data, monitoring data, or a combination of both.
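
To make system generated fidelity metrics concrete, the following sketch (the log entries are hypothetical, not data from the study quoted above) computes the proportion of planned messages actually delivered to each participant:

```python
from collections import defaultdict

# Hypothetical delivery log exported from an SMS gateway: (participant_id, status)
delivery_log = [
    ("P01", "delivered"), ("P01", "delivered"), ("P01", "failed"),
    ("P02", "delivered"), ("P02", "delivered"), ("P02", "delivered"),
    ("P03", "failed"),    ("P03", "delivered"),
]
planned_per_participant = 3  # messages scheduled per participant in the period

delivered = defaultdict(int)
for participant, status in delivery_log:
    if status == "delivered":
        delivered[participant] += 1

for participant in sorted(delivered):
    fidelity = delivered[participant] / planned_per_participant
    print(f"{participant}: {delivered[participant]}/{planned_per_participant} "
          f"planned messages delivered ({fidelity:.0%})")
```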

Discussion

The mERA checklist was born from the recognition of a lack of adequate, systematic, and useful reporting of mHealth interventions and associated research studies. The tool was developed to promote clarity and completeness in the reporting of research involving the use of mobile tools in healthcare, irrespective of the format or channel of such reporting. Currently, many mHealth studies are descriptive, with a growing number adopting more rigorous experimental designs. The mERA checklist aims to be agnostic to study design and to be applied in conjunction with the existing tools that support transparent reporting of the study designs used. Adoption of the mERA checklist by journal editors and authors in a standardised manner is anticipated to improve transparency and rigour in reporting, while highlighting issues of bias and generalisability, and ultimately to temper criticisms of overenthusiastic reporting in mHealth.

The mERA checklist was developed by a group of experts assembled as part of the WHO mTERG, reflecting a diversity of geographical, gender, and domain expertise. Contributors outside of mTERG were recruited through professional and academic networks; their representation could have been biased towards experts focused on public health interventions in low and middle income country programmes. Members of the development team leveraged their own experiences in working in mHealth to identify important domains and criteria that are inconsistently reported in the extant literature. The criteria presented here have been repeatedly applied to various types of evidence to determine how well they pertain to different study designs and reporting formats.

The group’s pragmatic and iterative process in developing this checklist attempted to capture scientific consensus around appropriate mHealth reporting. The intensity of the feedback and testing cycles that this tool went through has led to a set of criteria that is now fairly repeatable in its application and serves to identify high quality content for aggregation and synthesis. Adhering to the mERA checklist might add to the word count of a manuscript, and given journal word limits, including all the details of the mHealth intervention might not be possible. The mERA checklist therefore encourages authors to refer the reader to an external link or resource where such intervention details are available.

This checklist represents an ambitious effort to standardise the reporting of mHealth evidence. The core 16 item checklist aims to fill a substantial gap in the existing mHealth evidence space, where poor reporting of mobile interventions has resulted in limited replication of effective interventions. We expect that the mERA checklist represents a set of evolving criteria that will be appraised and, if necessary, updated. The checklist will be disseminated through workshops and presentations at the mHealth Summit, the mHealth Working Group, and other global informatics forums. Additionally, the checklist will be hosted on the WHO mTERG website and the EQUATOR Network website. The mERA checklist will be continuously revised and versions will be released periodically on the basis of feedback, comments, and experiences from its use. We invite readers to share their comments and experiences.

Conclusion

The mERA tool aims to assist authors in reporting mHealth research, to guide reviewers and policy makers in synthesising high quality evidence, and to guide journal editors in critically assessing the transparency and completeness of the reporting of mHealth studies. Like similar checklists, mERA does not evaluate the quality of the research itself, but rather the quality of reporting of the research and the mHealth intervention. Through widespread discussion, refinement, and adoption, we expect that the use of this checklist will indirectly improve the quality of mHealth evidence in the literature. More transparent and rigorous reporting can reveal gaps in the conduct of research and aid efforts to synthesise findings. This, in turn, will improve understanding of how to use mHealth and of its effects as a field of inquiry.

Footnotes

  • We thank the following members of the mTERG QoI taskforce group for their contributions: Nandini Oomann, Caroline Free; the following members of the JHU-GmI mERA taskforce: Larissa Jennings, Marguerite Lucea, James BonTempo, and Christian Coles; Michelle Carras, Jill Murray, Tara White, Cesar Augusto, Shreya Pereira, Sean Galagan, Emily Mangone, and Angela Parsecepe for applying mERA to peer reviewed articles and grey literature and for providing valuable feedback; and the reviewers of this manuscript for their insightful feedback.

  • Contributors: SA, AEL, and JL led the development of an initial approach and draft of the checklist. AL and GM led the organising of the consensus meeting held in December 2012. All coauthors participated in this meeting or follow-up meetings to finalise the checklist items. KL, SA, and AL led teams to test the checklist items. All authors contributed to the drafting and revision of the paper, and reviewed and approved the final version. Members of the mTERG mERA taskforce who contributed to this manuscript include: Lavanya Vasudevan (Duke University Global Health Institute, USA), Tigest Tamrat (WHO Department of Reproductive Health and Research, Switzerland), Karin Kallander (Malaria Consortium Africa, Uganda), Marc Mitchell (D-Tree International, USA), Muna Abdel Aziz (Sudanese Public Health Network, UK), Frederik Froen (Norwegian Institute of Public Health, Norway), Hermen Ormel (Royal Tropical Institute (KIT), Netherlands), Maria Muniz (UNICEF, USA), Ime Asangansi (UN Foundation, Nigeria).

  • Funding: Support and funding for the development of the mERA checklist was received from the WHO Department of Reproductive Health and Research.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: support from the WHO Department of Reproductive Health and Research for the submitted work; SA, AEL, and AL received a grant from GSMA-Mobile for Development to provide evaluation support to their 10 country mobile based nutrition programme, and from WHO to provide evaluation support to their grantees working on mobile based health interventions; KL was contracted by WHO to apply the mERA tool to mobile phone interventions for sexual and reproductive health; GM received financial support from the Norwegian Agency for Development Cooperation and the WHO Department of Reproductive Health and Research fund for hosting mobile health related meetings; MM has a pending patent for a mobile, point of care monitor for patients’ health conditions with a decision support system; the remaining authors declare no competing interests.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

References
