Data-Driven Solutions to Common Preanalytical Issues Part 1: Overview [Hot Topic]

Darci Block, Ph.D.

The preanalytical phase of laboratory testing has the greatest potential for error, with 70% of laboratory testing errors occurring during this phase. Quality indicators and event management are essential to identify and solve problems. Dr. Block presents an overview of methods to identify and prevent preanalytical errors and offers strategies for gathering data to support improvement of identified issues.

Presenter and Credentials:
Darci Block, Ph.D., Director Laboratory Services and Co-Director Central Clinical Laboratory in the Department of Laboratory Medicine and Pathology at Mayo Clinic in Rochester, Minnesota

Transcript

Our speaker for this program is Darci R. Block, PhD, Director of Laboratory Services and Co-Director of the Central Clinical Laboratory in the Department of Laboratory Medicine and Pathology at Mayo Clinic in Rochester, Minnesota. Welcome to Mayo Medical Laboratories Hot Topics. These presentations provide short discussions of current topics and may be helpful to you in your practice. Today our topic is an overview of data-driven solutions to common preanalytical issues.

Dr. Block, thank you for presenting today.

I have no disclosures.

This presentation will cover the different phases of testing and where the opportunities for errors exist. We will discuss some methods for identifying and potentially preventing preanalytical errors, as well as strategies for gathering data to aid in process improvement. In Parts 2 and 3 of this series, I will present cases where we have gathered data to make preanalytical quality improvements, including emergency department redraws and leaking urine containers.

It’s well known and documented that the majority of laboratory testing errors occur during the preanalytical phase of testing, before the specimen is placed on an analyzer. The flow diagram demonstrates some of the errors that may be encountered, along with the frequencies at which they have been reported to occur. One limitation commonly cited in these types of studies is that underreporting and underdetection of such errors are bound to happen, so these are most likely conservative frequency estimates.

So we ask ourselves, “How do you know when there is a preanalytical issue?” In the testing laboratory we run quality control samples whose purpose is to detect when something may have changed that could compromise results. This is not done as easily during the preanalytical phase. Instead, we rely on quality indicators: objective, evidence-based measures that monitor the consistency of a process across settings and over time. They should be designed to cover all steps of the preanalytical phase. The other activity that is particularly useful for monitoring quality in the preanalytical phase is event management, in which errors are investigated, corrected, and ideally prevented from happening again in the future.
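To make this concrete, here is a minimal sketch of how a specimen-suitability indicator might be tallied from event records, assuming such events can be exported from a laboratory information system. The site names, counts, and 10% threshold are purely illustrative, not an actual benchmark:

```python
from collections import Counter

# Hypothetical monthly event records: (collection site, specimen suitable?).
# In practice these would come from the laboratory information system.
events = [
    ("ED", False), ("ED", True), ("ED", False), ("ED", True), ("ED", True),
    ("ICU", True), ("ICU", True), ("ICU", False),
    ("Clinic", True), ("Clinic", True),
]

THRESHOLD = 0.10  # illustrative target: fewer than 10% unsuitable specimens

totals, failures = Counter(), Counter()
for site, suitable in events:
    totals[site] += 1
    if not suitable:
        failures[site] += 1

# Report each site's rate and flag any site exceeding the target.
for site in sorted(totals):
    rate = failures[site] / totals[site]
    flag = "INVESTIGATE" if rate > THRESHOLD else "ok"
    print(f"{site}: {failures[site]}/{totals[site]} = {rate:.1%} [{flag}]")
```

The point of the sketch is simply that an indicator pairs a numerator (events) with a denominator (opportunities) and a target, so an out-of-range value triggers investigation rather than just observation.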

The IFCC Working Group on Laboratory Errors and Patient Safety has developed recommended quality indicators that it suggests be harmonized, collected, and monitored by all clinical laboratories. I encourage this audience to download the paper to read more about this proposal, as many details have been omitted from the adapted version in this table. However, you can see that the entire process, from the test request to the suitability of the specimen, is covered by a recommended quality indicator. The group hasn’t specified what the targets or benchmarks should be; however, harmonization is an important first step to ensure that everyone is speaking the same language.

So once your quality indicators are in place and bumping along, you notice one that isn’t meeting its goals. What you do next is critical for any successful quality program. There is not much value in continuing to monitor without ever acting on an out-of-control quality indicator. It needs action! What kind of action? Here are just a few tools that are useful when investigating and improving a process that may need attention. You might start with root cause analysis to get to the bottom of the actual cause of the error or problem. PDSA (Plan-Do-Study-Act) is a nice tool I think of as a well-controlled “pilot study,” where a small change is made that may or may not end up being temporary. During the pilot, you gather the data necessary to determine whether the temporary change should be implemented permanently. Finally, for complex or high-risk process improvements, the more structured DMAIC (Define-Measure-Analyze-Improve-Control) process may be more appropriate, incorporating Lean and Six Sigma concepts.
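As an illustration of the data-gathering step in a PDSA cycle, the sketch below compares a hypothetical baseline error rate against a pilot-period rate using a simple two-proportion z-test. All counts are invented, and the approach assumes the normal approximation is reasonable for the counts involved:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two error rates
    (normal approximation; adequate when event counts are not tiny)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1, p2, z, p_value

# Hypothetical counts: 42 mislabeled specimens in 3,000 baseline draws
# versus 18 in 2,500 draws during the PDSA pilot.
p1, p2, z, p = two_proportion_ztest(42, 3000, 18, 2500)
print(f"baseline {p1:.2%} vs pilot {p2:.2%}: z = {z:.2f}, p = {p:.4f}")
```

A comparison like this is what lets the “Study” step of PDSA answer objectively whether the pilot change moved the indicator or whether the apparent improvement is within noise.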

This diagram is meant to show all the possible sources of data that may have value for various process improvement projects. As previously mentioned, it is imperative that the root cause be determined. It is important to measure the baseline error rate, the frequency at which the unwanted action or inaction is taking place; otherwise you will never have objective evidence that a change has had an impact. In many cases it is useful to map out the process in question so that its interconnectedness with other processes and potential stakeholders can be appreciated. It is important to involve stakeholders, and the first step is identifying them. It is good to outline the goals and requirements for the project so you know whether it was successful. Finally, conduct a cost analysis whenever applicable; it is important to remember that there are many hidden costs you may not be aware of for a particular process. There is a nice review on this subject in the American Journal of Clinical Pathology that I highly recommend.
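For example, one way to put an uncertainty bound on a measured baseline error rate is a Wilson score interval, which behaves well for small proportions. The counts below are hypothetical, loosely echoing the leaking-urine-container example mentioned earlier in this series:

```python
import math

def wilson_interval(errors, total, z=1.96):
    """95% Wilson score interval for an observed error proportion."""
    p = errors / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))) / denom
    return center - half, center + half

# Hypothetical baseline: 25 leaking urine containers out of 1,200 received.
lo, hi = wilson_interval(25, 1200)
print(f"baseline rate {25/1200:.2%}, 95% CI {lo:.2%}-{hi:.2%}")
```

Reporting the interval alongside the rate makes it clear how much data must be collected after a change before a real difference can be claimed.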

If you read that article, the take-home message is that it is more cost-effective to prevent errors from happening in the first place than to find them later in the process and issue a corrective measure. Here are some strategies for preventing preanalytical errors. They include developing clear SOPs and standardized training on those SOPs. It is also helpful to automate processes and standardize them as much as possible. The best way to be successful is to design processes where the right thing to do is also the easiest; as a corollary, there will be fewer errors if the wrong thing to do is the hardest. We have also discussed developing and monitoring quality indicators and launching investigations when they aren’t meeting targets. And finally, communicating quality metrics and any process improvements broadly can never hurt.

In conclusion, the preanalytical phase of testing has one of the greatest opportunities for error.  The challenge is that the processes are often quite manual and there isn’t an easy way to monitor quality in real time.

  • Ultimately well designed quality indicators and event management with proper investigation and follow-through are key to identifying problems.
  • Meaningful data should be gathered from multiple sources, which may include the literature, clinical or laboratory guidelines, baseline error rates, staff surveys, etc. Whenever possible, automated or electronic data collection is ideal.
  • It’s important to consider all stakeholders when designing and implementing process improvements.

I leave you with this quote that I stumbled across the other day and thought it very fitting for this topic.  “If you don’t ask the right questions, you don’t get the right answers. Only the inquiring mind solves problems.”

If this topic was of interest to you please join us for the Phlebotomy Conference here in Rochester, Minnesota, April 23rd to the 24th, 2015.


MML Education

This post was developed by our Education and Technical Publications Team.