NPC Archive Item: Diagnostic error: more about problems in thinking than problems with knowing

NOTE – This is an archive post from the NPC and has not been updated since first publication. Therefore, some hyperlinks may no longer be working.
MeReC Rapid Review

28 July 2011

In a retrospective audit, published in 2005, 100 cases of diagnostic error were analysed to determine the roles that cognitive and system-related factors play in such errors. Diagnostic errors were rarely caused by inadequate knowledge: the most common contributors to misdiagnosis were system-related factors, cognitive factors, or a combination of both.

Level of evidence:
Level 3 (other evidence) according to the SORT criteria

Action
Adequate knowledge is obviously necessary, but it should not be assumed that this is sufficient to ensure good decision-making. All people involved in making decisions in health care (prescribers, other health care professionals, managers, patients and carers) should consider the processes by which they make decisions. The NPC has produced a DVD ‘Making decisions better’, which may assist with this.

What is the background to this?
Diagnostic error is relatively common (rates of 10-15% have been suggested), but little is known about the different types and causes of this kind of medical error. To understand how these errors arise and how they might be prevented in the future, the authors systematically examined the aetiology of error using root cause analysis. A retrospective audit was performed analysing 100 cases (comprising autopsy discrepancies, quality assurance activities and voluntary reports) collected over a 5-year period from five large tertiary medical centres in the US. Misdiagnoses were confirmed by tissue or biopsy specimens, X-ray studies, and pathognomonic clinical findings or procedural results. Factors contributing to the errors were classified as system-related or cognitive factors.

The impact of each misdiagnosis was recorded using a Veterans Health Administration (VHA) scale, which multiplies the likelihood of recurrence (scored 1-4, with 4 being the most frequent) by the severity of harm caused by the misdiagnosis (scored 1-4, with 4 being catastrophic injury).
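To make the scale's arithmetic concrete, a worked example may help (the example scores here are invented for illustration; the anchor definitions for each score are given in the original paper):

\[
\text{impact score} = \text{likelihood of recurrence} \times \text{severity of harm}, \qquad 1 \le \text{impact score} \le 16
\]

An error scored 2 for likelihood of recurrence and 2 for severity of harm, for example, would receive an impact score of 2 × 2 = 4, close to the mean of 3.80 reported below.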

What does this study claim?
The average impact score on the VHA scale was substantial: 3.80 ± 0.28 (mean ± standard error of the mean [SEM]). Causes of the errors were categorised as:

  • no fault errors
    ◊ masked or unusual presentation of disease
    ◊ patient-related error (uncooperative, deceptive)
  • system-related errors
    ◊ technical failure and equipment problems
    ◊ organisational flaws
  • cognitive errors
    ◊ faulty knowledge
    ◊ faulty data gathering
    ◊ faulty synthesis

Overall, 228 system-related and 320 cognitive factors were identified; on average, 5.9 factors contributed to the diagnostic error in each case. No-fault factors were involved in 44% of cases, cognitive errors in 74%, and system-related factors in 65%.

The majority of system-related factors (215 instances) were related to organisational problems, and a small fraction (13 instances) involved technical and equipment problems. The factors encountered most often related to policies and procedures, inefficient processes, and difficulty with teamwork and communication, especially communication of test results. Many error types were encountered more than twice in the same institution.

The most common category of cognitive-based factors was faulty synthesis, or flawed processing of the available information (265 instances). The single most common phenomenon was premature closure: the tendency to stop considering other possibilities after reaching a diagnosis (such as diagnosing musculoskeletal pain in a patient with a ruptured spleen following a car crash), a cause of error in 39 cases. Other common synthesis factors included:

  • faulty context generation (for example, missing a perforated ulcer in a patient with chest pain and laboratory evidence of myocardial infarction);
  • misjudging the salience of a finding (such as diagnosing sepsis in a patient with stable leukocytosis caused by myelodysplastic syndrome);
  • faulty perception (such as missing a pneumothorax on a chest X-ray);
  • failed use of heuristics, or mental shortcuts (for example, diagnosing bronchitis in a patient with a pulmonary embolism).

Faulty context generation and misjudging the salience of a finding often occurred in the same case (15 of 25 instances). Perceptual failures most commonly involved incorrect readings of X-ray studies by internists and emergency department staff before official reading by a radiologist. Of the 23 instances related to heuristics, 14 reflected the bias to assume that all findings were related to a single cause, when a patient actually had more than one condition. In seven cases, the most common condition was chosen as the likely diagnosis, although a less common condition was responsible.

Faulty data gathering was identified in 45 instances. The dominant cause of error in this category lay in ineffective, incomplete, or faulty workup (24 instances). For example, the diagnosis of subdural haematoma was missed in a patient who was seen after a motor vehicle crash because the physical examination was incomplete. Problems with ordering the appropriate tests and interpreting test results were also common in this group.

Inadequate knowledge was identified in only four cases, each concerning a rare condition. Seven cases of inadequate skills involved misinterpretation of X-ray studies and electrocardiograms by non-experts.

Cases in which only cognitive factors caused the diagnostic error (28% of cases) produced a larger impact score than cases with only system-related errors (19% of cases): 4.11 ± 0.46 versus 2.54 ± 0.55, respectively (p = 0.03). Errors due to a combination of system-related and cognitive factors (46% of cases) created the largest impact score of 4.27 ± 0.47.

So what?
This study highlights some important issues, although it has some limitations. Firstly, it was confined to general medicine, and so may not be representative of other areas of medicine. In addition, the selection method used is not sufficiently described to explain how the authors minimised bias in the selection of cases. It was necessarily retrospective and subjective in nature, and although the authors attempted to minimise hindsight bias by using standard approaches, recollections of the clinicians involved may have been distorted by the time lag and their knowledge of the outcomes. It is also difficult to discern exactly how a given diagnosis was reached, or to capture the other factors which may affect decision making: stress, fatigue, distractions, and so on. Nevertheless, practitioners and managers can take some important messages from it.

The concept known as ‘bounded rationality’ describes how the human brain has a limit to the amount of information it is able to use in decision-making. Dual process theory explains that humans process this information in one of two ways, and tend to favour System 1 processing. This ‘intuitive, automatic, fast, frugal and effortless’ process involves the construction of mental patterns, shortcuts and rules of thumb (heuristics) through experience and repetition. The alternative, System 2 processing, involves a careful, rational analysis and evaluation of the available information; this is effortful and time consuming. Elstein and Schwartz describe several approaches to diagnostic reasoning (testing hypotheses, pattern recognition, and opinion revision as clinical information becomes available) as well as some of the sources of error, such as availability (recent or vivid events are perceived as more frequent than they are) and representativeness (over-reliance on key features that imply similarity to other cases, while other salient features are ignored).

The authors of the study discussed here suggest possible strategies to reduce the incidence of diagnostic errors. Parallels may be drawn with decision-making in other contexts, such as treatment selection for individual patients and policy decisions on the managed entry of new medicines.

Firstly, the high prevalence and importance of system-related factors suggests that addressing these might reduce errors. An example suggested by the authors is ensuring that radiologists are available to interpret X-rays. In other contexts, examples might include ensuring that pre-packs of ibuprofen or naproxen are available, rather than diclofenac, in view of their preferable cardiovascular side-effect profile (see MeReC Rapid Review 2451).

Interventions to reduce cognitive errors are, however, a more complex problem. The study suggests that the clinicians involved had sufficient medical knowledge, but that problems arose from inappropriate cognitive processing and/or poor skills in monitoring one’s own cognitive processes (metacognition). Inherent in decision-making is the human trait to make fast decisions based on past experiences, and much of this behaviour is ‘hard-wired’ as part of our evolution. Interventions to countermand behaviours hard-wired by hundreds of thousands of years of evolution may prove elusive.

The authors of the study discussed here suggest that the following strategies hold promise:

  • Orientation: teaching clinicians how they make decisions, and the frequent obstacles and pitfalls they face, so that they can avoid such problems. This could be applied to all contexts of decision making.
  • Combating the tendency to premature closure (the most common cognitive factor identified) by purposefully pursuing alternative diagnoses (or, in other contexts, alternative treatment options, etc.).
  • Using the technique of ‘prospective hindsight’, the crystal ball experience: clinicians are asked to assume that their working diagnosis (or management plan, etc.) is wrong, and then to ask themselves what alternatives should be considered.
  • The use of ‘forcing functions’ to augment the clinician’s metacognitive skills; for example, expert computer systems which produce reminders and prompts for clinicians during the decision-making process (a minimal sketch of this idea follows the list).
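To make the ‘forcing functions’ idea concrete, the following is a minimal sketch in Python. It is purely illustrative: neither the study nor the NPC describes a concrete implementation, and all names here are invented. The point is simply that the system refuses to finalise a working diagnosis until at least one alternative has been recorded, nudging the clinician against premature closure.

    # Hypothetical sketch of a 'forcing function' in a decision-support
    # system; all names are invented for illustration.

    class PrematureClosureError(Exception):
        """Raised when a diagnosis is finalised without recorded alternatives."""

    def finalise_diagnosis(working_diagnosis, alternatives_considered):
        """Block completion until at least one alternative has been considered."""
        if not alternatives_considered:
            raise PrematureClosureError(
                f"Record at least one alternative to '{working_diagnosis}' "
                "before finalising (guard against premature closure)."
            )
        return working_diagnosis

    # The prompt forces the clinician to ask 'what else could this be?'
    # before committing, e.g. for the car-crash example given earlier:
    diagnosis = finalise_diagnosis(
        "musculoskeletal pain",
        alternatives_considered=["ruptured spleen", "rib fracture"],
    )

Real clinical decision-support systems are, of course, far more sophisticated; the design principle is simply that the workflow blocks completion until the metacognitive step has demonstrably happened.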

The author of this Rapid Review was Greg Brown, a student considering an application to medical school, who was working at the National Prescribing Centre on work experience.

Please comment on this rapid review using our feedback form.

Make sure you are signed up to NPC Email updates — the free email alerting system that keeps you up to date with the NPC news and outputs relevant to you