According to 21 CFR Part 820, medical device manufacturers are required to validate their processes and to monitor and control process parameters. The regulation on quality systems does not specify how this is accomplished, only that “a process is established that can consistently conform to requirements” and “studies are conducted demonstrating” this.
How can you make the best decision? Thorough process development, optimization and control using appropriate statistical methods and tools are recommended for demonstrating that processes are both stable and capable. “We all want to make better decisions, whether in business or personally,” says Heath Rushing, an international speaker well-versed in the application of statistics. He is also Co-Founder of Adsurgo, LLC, a professional services company offering consulting and workshops focused on the use of analytics. “We live in a world of uncertainty—so we should be using data to make these decisions. That is what statistical analysis is about: making data-driven decisions.”
Mr. Rushing, co-author of the book, Design and Analysis of Experiments by Douglas Montgomery: A Supplement for Using JMP, will provide attendees of his OMTEC® 2018 session with guidance on the need for statistical methods in process validation. He will demonstrate ways to efficiently and effectively apply recommended statistical methods to process validation—with no statistical expertise needed. Using realistic process data, participants will learn how to apply tools, interpret results and draw meaningful conclusions throughout Installation Qualification (IQ), Operational Qualification (OQ) and Performance Qualification (PQ).
In preparation for the discussion, we spoke with Mr. Rushing about statistics, how they play into process validation and the benefits and challenges of statistical analyses.
Updated regulations and standards have placed a greater emphasis on risk management. How does risk management factor into statistical methods in process validation?
Mr. Rushing: Any decision without risk is not a decision! Statistical analysis accounts for different levels of risk in decision making. Also, I am a believer in the use of risk management tools (cause-and-effect diagrams, Failure Modes and Effects Analysis, fault tree analysis), prior to doing any experiments as well as both during and after experiments. The experiments should enhance our knowledge about risk.
For the non-statistician, what are some of the ways to efficiently and effectively apply statistical methods and tools to process validation?
Mr. Rushing: Too many times, non-statisticians use complex methods they don’t understand and can’t explain to someone else. I would rather they start with a simple method and build upon their knowledge. For example, start with a run chart instead of a control chart. Note that three of the top four Nelson control rule violations can be easily identified on that run chart; then build upon that knowledge by adding control limits. You can learn an awful lot by just visualizing your data. (Editor’s note: Nelson rules are a method in process control for determining whether a measured variable is out of control—unpredictable vs. consistent. Rules for detecting “out-of-control” or non-random conditions were first postulated by Walter A. Shewhart in the 1920s.)
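To illustrate Mr. Rushing’s point that some Nelson-rule signals can be spotted without control limits, here is a minimal sketch (with made-up measurement data) that checks two such patterns on a plain run chart: a shift (nine consecutive points on one side of the mean) and a trend (six consecutive points steadily rising or falling). The function name and data are illustrative, not from the article.

```python
import statistics

def nelson_rule_flags(data):
    """Flag two out-of-control signals that are visible on a plain run
    chart, with no control limits required. Illustrative sketch only."""
    mean = statistics.mean(data)
    flags = []
    # Shift: nine consecutive points on the same side of the mean
    for i in range(len(data) - 8):
        window = data[i:i + 9]
        if all(x > mean for x in window) or all(x < mean for x in window):
            flags.append(("shift", i))
    # Trend: six consecutive points steadily increasing or decreasing
    for i in range(len(data) - 5):
        w = data[i:i + 6]
        if (all(w[j] < w[j + 1] for j in range(5))
                or all(w[j] > w[j + 1] for j in range(5))):
            flags.append(("trend", i))
    return flags

# Simulated process readings: flat, then a steady upward drift
readings = [10.0, 10.1, 9.9, 10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2, 11.4]
print(nelson_rule_flags(readings))
```

Plotting the same data as a run chart would make the drift obvious at a glance, which is the spirit of starting simple before layering on control limits.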
Where do you see statistical analyses, in terms of process validation, headed in the next five years?
Mr. Rushing: Away from the hard-and-fast decisions that statistical analysis CAN provide, toward the product or system understanding that it SHOULD provide. For example, I can use a p-value to make a decision. If it is less than the significance level, there is a statistically significant difference. If not, I am back to what I assume to be true: there is no difference. However, I could also use a confidence interval, which would allow me to characterize both the statistical and the practical difference. The latter is more helpful. I like the direction it is moving.
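The contrast Mr. Rushing draws can be sketched in a few lines: a confidence interval on a difference in means reports both whether the difference is statistically distinguishable from zero and how large it is in practical terms. The yield data below are hypothetical, and for simplicity the sketch uses the normal critical value 1.96 (a t quantile would be more appropriate at this sample size).

```python
import math
import statistics

# Hypothetical yield measurements from two process settings
old = [98.2, 97.9, 98.5, 98.1, 98.3, 97.8, 98.4, 98.0]
new = [98.6, 98.9, 98.4, 98.8, 98.7, 99.0, 98.5, 98.8]

diff = statistics.mean(new) - statistics.mean(old)
se = math.sqrt(statistics.variance(old) / len(old)
               + statistics.variance(new) / len(new))

# Approximate 95% confidence interval for the difference in means
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference: {diff:.2f}, 95% CI: ({lo:.2f}, {hi:.2f})")
```

An interval that excludes zero signals statistical significance, just as a small p-value would; but unlike the p-value alone, its endpoints let you judge whether the difference is big enough to matter for the process.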
Do you find that people are sometimes afraid, for lack of a better term, of statistics? Perhaps “easily overwhelmed” is a better description. Why does this happen? Is it a lack of education on statistics?
Mr. Rushing: “Rapidly overwhelmed” is probably a better term to use! In my experience, this happens because people move from very simple methods to more complex methods in a very short period of time, without putting their knowledge into practice. And I think statisticians may be to blame for this. Very often a statistician at a company will want me to teach a course on fairly complex methods to their team, and will ask me to just skip over the steps needed to get there. Those steps are elementary for the statistician, but not for the non-statisticians. The education and understanding are getting much better.
That is interesting. How can people educate themselves on statistics?
Mr. Rushing: Start with the basics, like simple data visualization; use it, and then learn the next step. Eat the elephant one bite at a time. There are so many free resources out there that people can utilize. I myself have done quite a few videos and webinars that are on the internet just to further the cause and increase the use. What you can easily do with the statistical software today is unbelievable. And a little training goes a long way!
Main photo courtesy of Shutterstock; inset photo courtesy of Heath Rushing