A primer on PDSA: executing plan–do–study–act cycles in practice, not just in name
Jerome A Leis1,2,3, Kaveh G Shojania1,3

1Centre for Quality Improvement and Patient Safety, University of Toronto, Toronto, Ontario, Canada
2Division of Infectious Diseases, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
3Department of Medicine, University of Toronto, Toronto, Ontario, Canada

Correspondence to Dr Jerome A Leis, Sunnybrook Health Sciences Centre, H463, 2075 Bayview Avenue, Toronto, Ontario, Canada M4N 3M5; jerome.leis{at}sunnybrook.ca

Introduction

Plan–do–study–act (PDSA) cycles are the building blocks of iterative healthcare improvement.1 Although frequently regarded as separate from research,2 this quality improvement method remains rooted in the scientific method. The P in PDSA usually stands for ‘plan’ but could just as easily refer to ‘predict’. Each cycle combines prediction with a test of change (in effect, hypothesis testing), analysis and a conclusion regarding the best step forward—usually a prediction of what to do for the next PDSA cycle.3

Too often, however, improvement teams go through the motions of PDSA cycles without really embracing their spirit or applying the scientific method behind them. For example, an improvement team might talk about having used PDSA when in reality the original change idea remained roughly unchanged throughout the project, with no refinements to the intervention or to the plan for implementing it. Quality improvement rarely works out so smoothly. Even among published studies, which presumably include better than average projects, the application of PDSA falls short, with fewer than half of studies meeting minimum characteristics of PDSA.4 Sometimes PDSA seems more like a quality improvement catchphrase than a recognisable scientific process.

In this paper, we review a recent improvement project5 to draw out examples of the real-world application of PDSA. We chose this project not to place it on a pedestal for the improvements achieved but rather to demonstrate PDSA methodology and highlight the benefits of putting it into practice.

Illustrative example: project to reduce unnecessary urinary catheters among patients on general medical wards

Urinary catheter overuse contributes to unnecessary patient harms including local trauma, decreased mobility, delirium and infection.6 As in many institutions, the practice at our tertiary care hospital in Toronto had been to leave decisions about insertion and removal of urinary catheters to the discretion of individual physicians without any systematic process to reassess them. Clinicians and infection control experts had the impression that urinary catheters often remained in place for excessive durations on the ward, but no one had formally documented this problem.

Table 1 summarises the eight PDSA cycles of this project, including the prediction, testing and key lessons learned. Although the literature reports a number of effective interventions to prompt reassessment of urinary catheters, we did not know which would work at our institution.7 The first two PDSA cycles focused on confirming the burden of unnecessary catheter use at our institution and understanding its causes. We found that many unnecessary catheters were being inserted in the emergency department (ED), resulting in a lack of awareness on the ward about the ongoing indication. We therefore devoted PDSA cycle 3 to testing a change that involved adding an item to an existing nursing ‘transfer of accountability form’. This form facilitated the handover from nurses in the ED to nurses on the ward by prompting discussion of patient issues such as diet and pending orders, and it seemed promising to add an item about the presence of a urinary catheter. In developing this intervention, we quickly learned that, as with most handover tools, the form was used to support the dialogue between the transferring and receiving nurse but was not intended as a chart copy in the medical record. This meant that we would be unable to measure the degree to which nurses discussed catheters during handover, and it would be difficult to know whether our urinary catheter reassessment prompt had even been implemented.

Table 1

PDSA cycles in the design and implementation of an intervention to reduce unnecessary urinary catheters on general medical wards*

We also learned during PDSA cycle 3 that some nurses in the ED felt that a catheter intervention would increase workload. We reasoned that we might have more control over the decision to insert catheters among admitted patients and specifically devoted PDSA cycle 4 to testing the hypothesis that our admission order sets were promoting unnecessary urinary catheter insertions. We gathered all order sets, identified one unit with catheter insertion on its admission order set, and found that this unit accounted for the majority of the unnecessary catheter insertions. Revising the order set seemed like an easy fix, but given the time needed to institute this change through our institutional forms committee, we again shifted our focus—this time to urinary catheters left in place on inpatient units. We had noted during PDSA cycle 2 not only that catheters were left in place for excessive durations but also that some nurses on the ward were frequently asking residents to reassess the need for urinary catheters. We hypothesised that a medical directive (see footnote i) could be developed to give nurses greater autonomy in removing catheters on transfer to the ward.

In PDSA cycle 5, we first tested whether staff physicians could achieve consensus on the reasons that warrant leaving a catheter in place on their ward. Canvassing the target physician group produced consensus, but some physicians raised concerns about whether nurses would interpret the identified criteria consistently enough to avoid inappropriate removal of urinary catheters in some cases. We tested this hypothesis—that nurses could apply the criteria—in PDSA cycle 6 through usability testing with six nurses. Feedback received during these cycles led to fine-tuning of the directive and development of a postcatheter care algorithm.5

After nurses on the unit felt the algorithm was ready, we tested in PDSA cycle 7 whether nurses would apply the directive in practice. We trained nurses on two units and, during the first week, performed audits that confirmed fidelity >80%. We also learned that nurses found it easier to apply the directive early in the morning (at 6:00), allowing the day-shift nurse (who starts at 7:00) to provide postcatheter care later the same morning. The timing of the directive became standardised, and during PDSA cycle 8 we completed a 4-week pilot study to test whether these intervention units would have lower urinary catheter utilisation without any associated inappropriate catheter removals. Based on promising results (table 1), the pilot was extended to a formal controlled before-and-after study over 3 months before the medical directive was spread to all medical wards.5

Lessons from example project

Understanding the theory of PDSA is easy, but putting it into practice is often harder. As this case illustrates, though, the work of fully engaging in the PDSA methodology pays off. The real-world examples of PDSA described here highlight the key benefits obtainable from the authentic application of this methodology (box 1). The most recognised outcome of PDSA is the progressive increase in confidence that the change under development will actually lead to an improvement; however, there are other underappreciated benefits of PDSA worthy of further discussion.

Box 1

Benefits from the authentic application of plan–do–study–act cycles

  • Efficient use of data—collecting just enough to inform the best action forward

  • Refinement of measures and data collection methods (to ensure that baseline and intervention data are collected in similar fashion)

  • High ‘return on failure ratio’12 (valuable lessons learned with relatively few resources invested)

  • Recognition of necessary refinements to the intervention

  • Identification of missing ingredients for the intervention

  • Anticipation of what might go wrong during implementation

  • Increased confidence that the change under consideration will produce improvement

  • Engagement of stakeholders in development of the intervention

  • Minimised resistance when the change is implemented

Learning about the problem and data collection method

Although we typically think of PDSA cycles as a way of deploying an intervention, the earliest testing might focus on simply learning about the local problem (cycles 1 and 2) or developing the intervention (cycles 3 to 6), before refining it during implementation (cycles 7 and 8). As argued by Reed and Card in a recent commentary, ‘the intended output of PDSA is learning and informed action’ and not necessarily improvement.3 In this case study, only the last cycle resulted in improvement, while the seven others provided learning about what changes were needed to lead to improvement. In PDSA cycle 1, an afternoon audit allowed us to quickly confirm the existence of a problem worth further investment of time and resources, by refuting the null hypothesis that at least 80% of medical inpatients had appropriate indications for urinary catheterisation. We recognise that this initial step is not traditionally thought of as a PDSA cycle. Improvement teams generally jump to collecting baseline data with the intent of measuring the impact of different change ideas during subsequent PDSA cycles. However, conceptualising initial characterisation of the target problem as involving PDSA cycles offers several advantages.

First, since the intent was not to collect baseline data, there was no need to obtain an accurate estimate of catheter overuse, which would have required a much larger sample size. Additional data gathering to narrow the range of possible values for inappropriate catheter use would have been unnecessary at this early stage of the project. We simply wanted to show that the current state was not compatible with a reasonable target of 80% appropriate use. Second, the process of performing the audit also led to important insights that informed subsequent change ideas. For example, we incidentally learned that some nurses on the wards were frequently asking residents to reassess catheters. We inferred that these nurses would be more likely to adopt an intervention like a medical directive, which would give them greater autonomy. Third, these initial PDSA cycles helped uncover specific issues related to data collection methods that needed to be resolved to ensure that data would be collected in a similar way throughout subsequent PDSA cycles. For example, we noted difficulty in adjudicating appropriateness for urinary catheterisation because some of the criteria, like ‘critical illness’, were open to interpretation.5 This learning prompted us to apply a more objective measure of overall urinary catheter utilisation as our main outcome measure during the baseline and intervention periods in PDSA cycle 8.
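
Returning to the first of these points, a small worked example shows why a brief audit can suffice to refute the null hypothesis of at least 80% appropriate use (the counts below are hypothetical illustrations of the logic, not the actual figures from our audit). If an afternoon audit of 15 catheterised patients found an appropriate indication in only 7, the probability of a result at least that poor under the null hypothesis would be

\[ P(X \le 7 \mid n = 15,\ p = 0.80) \;=\; \sum_{k=0}^{7} \binom{15}{k} (0.80)^{k} (0.20)^{15-k} \;\approx\; 0.004, \]

which is small enough to reject the 80% benchmark without anything approaching a formal baseline sample.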

Efficient use of data

The cornerstone of PDSA is making rapid-cycle changes.4 This ability depends on articulating a focused prediction and collecting just enough data to test it. Too often, improvement projects jump to the end game rather than identifying the smaller intermediate steps that need to be addressed to have any chance of success. For example, cycle 5 could have focused on testing whether the medical directive would lead to decreased catheter days, but we first devoted cycles 5 to 7 to confirming consensus and engagement among physicians, iteratively improving usability and optimising adherence to the directive, before finally evaluating its impact on catheter use in cycle 8.

Occasionally, there may be external pressures to implement the intervention, leading the improvement team to be concerned that these intermediate steps will delay the project. Considering that over 80% of published PDSA studies gathered data less frequently than monthly, additional PDSA cycles may feel like putting the brakes on the project momentum.4 But it does not need to be this way. When focused predictions are combined with the efficient use of data, momentum only builds. In this case study, PDSA cycles lasted from as little as 1–2 days (cycles 1, 2 and 4) to a maximum of 3–4 weeks (cycles 5, 6 and 8). Three cycles used qualitative data only (3, 5 and 6), while two of the cycles that involved quantitative data had sample sizes between 9 and 18 (cycles 4 and 7).

A recent review in this journal highlighted the value of small sample sizes in propelling PDSA cycles forward.9 For example, to confirm in cycle 7 that fidelity of the medical directive was at least 80%, we needed a minimum sample size of only 12, provided that we observed 100% fidelity in those cases. In cycle 6, we stopped collecting data after six nurses because we had identified important usability issues and knew that there was little point in collecting additional data until these were addressed.
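
For the cycle 7 sample of 12, one way to see the arithmetic behind that threshold (our reading of the calculation; the binomial reasoning below is an assumption on our part rather than something spelled out in the original study) is to ask how likely a perfect audit would be if true fidelity were only 80%:

\[ P(\text{12 of 12 compliant} \mid \text{true fidelity} = 0.80) \;=\; 0.80^{12} \;\approx\; 0.07 \]

In other words, if true fidelity were only 80%, there would be roughly a 7% chance of observing full compliance in 12 consecutive cases (and less still if fidelity were lower), so a larger audit would have added little at this stage.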

Anticipating problems

Taking full advantage of the efficient use of data requires knowing the right questions to ask to inform each PDSA cycle. The iterative nature of PDSA allows interventions to be refined, but this is only possible when there is a clear and logical approach to moving the project forward.3 The prediction should be based on foreseeable problems with the change idea, each of which needs to be tested specifically to establish whether it will materialise and to develop a mitigating strategy. These PDSA cycles may address any or all of the following questions.

  • What component or ingredient may be missing in the intervention?

  • What potential refinements should be made to the existing ingredients?

  • What barriers to implementation could arise?

These ‘known unknowns’ need to be explored to determine whether the intervention will need to be modified to mitigate their impact. Each question also determines what type of data will be necessary for that particular PDSA cycle.

In this case study, the potential need for refinement of our intervention was identified in cycle 5, when we became aware that nurses might be unable to apply the criteria as written. Cycle 6 was therefore devoted to confirming that nurses could operationalise these criteria and to refining the directive as needed to allow them to do so. Cycle 5 also identified potential barriers to implementation, with some physicians opposing the idea of a nurse medical directive. We were able to mitigate these concerns by assuring them that we would do usability testing with the nurses first. We also monitored this problem during cycle 8 by giving nurses a number to call if they were ever given a difficult time by physicians for following the medical directive.

Parallel change ideas

Changing the intervention or adding a second intervention once the initial change has been deployed can be problematic. For this reason, traditional evaluative designs (including quasi-experimental studies and randomised trials) test only a single intervention at a time, even if multifaceted, in order to assess its impact accurately. In contrast, PDSA cycles do not always have to be linear and may overlap.10,11 In cycle 4, we concluded that the forms committee would need to revise the admission order set, but, given the committee's slow turnaround time, the team proceeded with cycle 5 concurrently. The order set was eventually changed 6 months later, after the medical directive had already been piloted, so this change did not contaminate our evaluation of that intervention. Keeping track of the timing of implementation for overlapping PDSA cycles is critical to determining the impact of the different change ideas.

High return on failure

The iconic schematic of PDSA cycles depicts elegant, perfectly circular wheels smoothly rolling up the ramp to improvement. In reality, some cycles lead to a failed attempt at improvement, while others pivot and sometimes cross paths with other lines of inquiry. Tomolo and colleagues highlighted this discrepancy between the teaching of PDSA and the reality with a picture that looks more like Salvador Dalí's melting clocks, with multiple distorted PDSA wheels going up and down a bumpy road, acknowledging the many false starts, dead ends and backsliding that can occur as the project evolves.11 In this case study, the accountability form in the ED represented a dead end that never gained further traction in the project.

Since missteps and bumps in the road are an expected outcome of trying something new, it should not be surprising that not all PDSA cycles lead to a rewarding step forward. What is often unappreciated by those who are demoralised when change ideas are unsuccessful is that the cycles that lead to disappointing results are often those that yield the most useful information about what to change and how to proceed. In a Harvard Business Review article on how to really learn from failure, Julian Birkinshaw introduces the ‘return on failure ratio’, where the denominator contains the resources invested in the project and the numerator represents the lessons learned.12 PDSA cycles are built to provide a high return on failure ratio, since the investment needed to test a small-scale change is usually minimal yet the lessons can be substantial. In our example, we learned within 2 weeks that our project in the ED was completely off course, and we shifted our attention to admitted inpatients.

Increasing stakeholder acceptance

Another tangible benefit of PDSA—often unappreciated—lies in its role in overcoming resistance and engaging stakeholders. It can be regarded as an effective change management strategy because it allows the project to gain acceptance gradually with each iterative cycle. In this example, we identified physicians who opposed the idea of a medical directive because they were concerned that nurses might not be able to recognise the appropriate indications for leaving a urinary catheter in place. These resisters were nonetheless willing to have us perform usability testing and a small pilot study to look for any adverse events, and sharing these early results gradually led to increased acceptance. Nurses were also engaged in the project through PDSA cycles aimed at developing the intervention. In the process, they gained significant ownership of the medical directive, which ultimately increased their willingness to lead this change. An alternative strategy that did not rely on iterative development, in which nurses were simply asked to use a medical directive created by physicians, may not have achieved the same impact.

Conclusion

PDSA cycles constitute the cornerstone of the model for improvement, and this method has obvious advantages when put into practice. The key to successfully harnessing the approach lies in making sure each cycle includes an explicitly stated prediction (or ‘plan’) and a test of change designed to answer it. Doing so gives improvement teams a clearer purpose and direction at each step of the way. Teams should also assess how authentically they have applied PDSA.

In table 2, we propose criteria that could be used for this purpose, while recognising that they have not been formally tested. Still, it is hard to imagine that any project ticking the boxes in the left-hand column of table 2 has authentically adhered to the model for improvement. If an initial change idea works without a hitch, do not kid yourself. Improvement efforts rarely proceed so smoothly. When improvement appears to have occurred seamlessly, you probably have not really improved anything, or you have not checked carefully enough to confirm that real improvement has occurred. Even in the rare case where the initial project idea requires no refinements, authentically executing the PDSA methodology still has benefits (box 1). Early in the project, these benefits include engaging stakeholders and increasing your confidence that the intervention will work. Later, in a successful project, you will have a greater understanding of how the specific changes you implemented led to improvement.

Table 2

Proposed self-assessment tool for plan–do–study–act (PDSA) applications

Acknowledgments

The authors gratefully acknowledge Greg Ogrinc, Patricia Trbovich and Gareth Parry for their comments on earlier drafts of this manuscript.

Footnotes

  • Competing interests KGS is the Editor-in-Chief of BMJ Quality & Safety.

  • Provenance and peer review Commissioned; internally peer reviewed.

  • i A medical directive is an order given in advance by physicians (or others authorised to write orders) to enable a qualified health professional (typically a nurse) to decide to apply the order under specific conditions without a direct assessment by the physician at the time.8 For instance, a medical directive might authorise triage nurses in the ED to obtain an ECG on a patient with chest pain without waiting for a physician to enter a direct order.
