Quality Assurance and Control
1-Describe the company
2-What is the problem (case) being analyzed
3-What is the solution approach
4-What results were found
5-Action plan or decision(s) made as a result.
Making the Case for Quality
At a Glance . . .
December 2015
• This teaching case study features characters, hospitals, and healthcare data that are all fictional.
• After using the case study in a classroom or organization, readers should be able to create a control chart, interpret its results, and identify situations that are appropriate for control chart analysis.
• The case is best suited for MBA operations courses and modules, particularly those focused on operations/process improvement. It also could be used in a hospital setting at a facility that has embraced a continuous improvement philosophy.
Using Control Charts in a Healthcare Setting
by Jack Boepple
After spending 10 years on the road as a healthcare operations improvement consultant, Isabella “Izzy” Cvengros decided it was time to settle down. Although Cvengros loved what she was doing, she had recently become engaged and wanted to spend more time with her future husband. However, with family members spread throughout the country, there was really no “home” to go back to.
As a consultant, Cvengros had been assigned to a wide variety of healthcare projects over the years, learning a great deal. She also enjoyed seeing so many different parts of the United States, with a particular fondness for New Hampshire, so she and her fiancé focused their job search there.
On October 10, 2014, Cvengros found herself with a good problem. She had just completed a series of interviews with two Nashua, NH, hospitals, Farrell Memorial Hospital and Penner Mobley Health Services, and both had gone very well. She interviewed for the same job at both facilities, director of operations improvement, and leadership from both facilities indicated she was proceeding to the final round of interviews, which entailed meeting each hospital’s executive team.
As a certified Lean Six Sigma Black Belt, Cvengros was thrilled to hear both hospitals’ progressive views on continuous improvement. While she saw examples of many quality tools and analyses being performed at each hospital, she did not notice any control charts being used.
Although control charts are typically associated with manufacturing processes, Cvengros knew they could be applied to any industry’s processes, including hospitals.
Because she had employed control charts with great success in several of her assignments, she incorporated this experience into her interview responses. Both hospitals were intrigued and asked if she could provide an example during her next round of interviews. Cvengros agreed, but to make the example control chart analysis more meaningful and relevant to each hospital, she asked each to provide her with its own data. Since one of the key discussion points during her interviews at both facilities revolved around reducing the patient’s length of stay, Cvengros asked for data on their estimated date of discharge (EDD) by week from January through September 2014.
About Farrell Memorial Hospital
Farrell Memorial Hospital is a 400-bed general medical and surgical hospital located in Nashua, NH. The hospital, which is part of a larger statewide healthcare system, has won numerous awards over the years, including for patient safety, performance in its rehabilitation patient care unit, and critical care excellence.
About Penner Mobley Health Services
Penner Mobley Health Services is a 500-bed general medical and surgical hospital in Nashua. Part of a regional health system, Penner Mobley has also received numerous awards, including being named a top-50 hospital in the country for the past six years, recognized for its high-quality and innovative nursing care, and recognized as a performance improvement leader.
Assessing Hospital Performance
The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) is a national patient satisfaction survey that asks patients about their experiences during a recent hospital stay.1 The responses are broken down into the following categories:
• Survey of patients’ experiences
• Timely and effective care
• Readmission, complications, and deaths
• Use of medical imaging
• Medicare payment
• Number of Medicare patients
Within some categories are sub-categories. For example, the “Timely and effective care” category has 10 sub-categories, including heart attack care, surgical care, and pregnancy and delivery care.
The results are maintained by the Centers for Medicare and Medicaid Services (CMS), and anyone can compare one hospital with another on Medicare’s Hospital Compare website.2 Knowing this, Cvengros ran a report of the patients’ experiences category results for both Farrell Memorial and Penner Mobley, compared to the state and national averages (see Table 1).
Both hospitals ranked below the state and national averages in many of the categories. Cvengros was surprised by the ranking of Penner Mobley Health Services because it is a Magnet®-recognized organization. Awarded by the American Nurses Credentialing Center, the Magnet® designation recognizes hospitals with high-quality, innovative nursing and best practices for patient care, particularly in the areas of nurse communication (Q1), availability of help (Q3), and receipt of discharge information (Q8). Cvengros found it interesting that Penner Mobley’s performance was rated lower than the state and national averages on nurse communication (Q1), availability of help (Q3), pain control (Q4), and explanation of medications (Q5).
Estimated Date of Discharge
One of the cost reduction approaches employed by hospitals is to reduce the patient’s length of stay (LoS). One of the strategies embedded within this approach is to actively plan the patient’s discharge. Developing a plan and setting a target date for completing a task increases the likelihood the task will be completed on time (versus no planning or coordination of resources). While task planning is a project management fundamental (similar to a work breakdown structure), it is still a relatively new concept in the healthcare field.
In hospitals, the “project teams” are composed of the patient’s physician, nurses, and ancillary professionals (such as pharmacists, physical therapists, occupational therapists, and social workers). The composition of the team depends upon the specific patient’s condition, so there could literally be as many project “teams” as there are patients.
Table 1 — HCAHPS Survey of Patients’ Experiences, October 2014

# | Question | Farrell Memorial | Penner Mobley | State average | National average
1 | Patients who reported that their nurses “Always” communicated well | 73% | 77% | 79% | 79%
2 | Patients who reported that their doctors “Always” communicated well | 76% | 81% | 81% | 82%
3 | Patients who reported that they “Always” received help as soon as they wanted | 62% | 63% | 70% | 68%
4 | Patients who reported that their pain was “Always” well controlled | 66% | 68% | 72% | 71%
5 | Patients who reported that staff “Always” explained about medicines before giving it to them | 61% | 62% | 67% | 64%
6 | Patients who reported that their room and bathroom were “Always” clean | 65% | 66% | 74% | 73%
7 | Patients who reported that the area around their room was “Always” quiet at night | 52% | 63% | 64% | 61%
8 | Patients who reported “Yes,” they were given information about what to do during their recovery at home | 84% | 87% | 87% | 85%
9 | Patients who gave their hospital a rating of 9 or 10 on a scale of 0 (lowest) to 10 (highest) | 64% | 73% | 75% | 71%
10 | Patients who reported “Yes,” they would definitely recommend the hospital | 68% | 77% | 76% | 71%
As noted by Jackie Birmingham, vice president of regulatory monitoring and clinical leadership at Curaspan Health Group,4 correctly estimating the date of discharge has several positive benefits/effects:
1. Improves care transition. According to Health Affairs (a journal of health policy thought and research): “The term care transition describes a continuous process in which a patient’s care shifts from being provided in one setting of care to another, such as from a hospital to a patient’s home or to a skilled nursing facility (SNF) and sometimes back to the hospital. Poorly managed transitions can diminish health and increase costs.”5 Estimating the date of discharge helps improve communication and coordination with the patient’s post-hospital destination.
2. Improves expectations setting with patients and their families. Part of setting an estimated discharge date is communicating it to the patients and their families. While the estimate is just that—a target—the net effect is to improve communication among all parties. Most people cope better with the known (vs. the unknown). By communicating the EDD to the patients and their families, some of the “mystery” is removed, and it brings them into the conversation.
3. Enables reduced unnecessary clinical variation in treatment. Standardized care plans for specific case types (e.g., sepsis) list a sequence of services needed by patients, based on an anticipated LoS. Criteria sets are used to monitor patients’ clinical progress to determine whether a continued stay is medically necessary. Embedded in both standardized care plans and criteria sets is a timing component, and the EDD helps quantify it.
4. Enables more efficient hospital operations. Capacity management, bed management, and patient throughput are all dependent on EDD.
5. Prepares for the future. Given the evolution of healthcare in recent years—with increased responsibilities for utilization reviewers, use of recovery audit contractors, and focus on denial management—justifying and documenting LoS has grown ever more important. As such, the EDD seems destined to be measured and monitored as a reimbursement metric.
Depending upon the hospital, these project teams might be called different names, such as a multidisciplinary team or an interdisciplinary team.
Whereas the review/update of a typical project plan might be done on a weekly basis, the “tasks” (i.e., patients) must be reviewed/updated on a daily basis. And with a large number of patients and various demands upon each caregiver specialty, coordinating a hospital project team can be a daunting task. Many hospitals have tackled this task by creating multidisciplinary rounds.
According to the Institute for Healthcare Improvement: “With multidis- ciplinary rounds, disciplines come together, informed by their clinical expertise, to coordinate patient care, determine care priorities, establish daily goals, and plan for potential transfer or discharge. This patient- centered model of care has proven to be a valuable tool in improving the quality, safety, and patient experience of care.”3
One barometer for assessing the effectiveness of multidisciplinary rounds is to measure EDD accuracy, since the EDD is one of the primary outcomes for each patient discussed during multidisciplinary rounds. Setting an EDD prompts active discussion of the barriers preventing a patient’s release. A natural byproduct of these discussions is a streamlined transition of care for patients (i.e., it helps reduce or minimize unnecessary clinical variation in treatment). It also fosters a team, rather than an individual, approach to patient care.
Mathematically:

EDD Accuracy = Number of Estimated Discharges Actually Discharged / Number of Potential Discharges

where:

Number of Potential Discharges = Number of Estimated Discharges Actually Discharged + Number of Estimated Discharges Actually Not Discharged + Number of Discharges Not Estimated
Although there is no ideal goal for EDD accuracy, higher is better. A low rate, or one that is “stuck,” is symptomatic of a problem and requires analysis to determine its cause. In general, a low rate indicates that (a) the staff is not communicating effectively, or (b) the staff is not taking the estimation of discharge dates seriously.
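To make the arithmetic concrete, the following is a minimal sketch (in Python, which the case itself does not use) of how a week’s EDD accuracy might be computed from the three counts defined above; the function name and the sample counts are hypothetical.

```python
def edd_accuracy(estimated_and_discharged, estimated_not_discharged, not_estimated):
    """EDD accuracy as defined above: estimated discharges that actually
    occurred, divided by the number of potential discharges."""
    potential_discharges = (estimated_and_discharged
                            + estimated_not_discharged
                            + not_estimated)
    return estimated_and_discharged / potential_discharges if potential_discharges else 0.0

# Hypothetical week: 42 accurate estimates, 13 estimates that did not result
# in a discharge, and 10 discharges that were never estimated.
print(f"EDD accuracy: {edd_accuracy(42, 13, 10):.1%}")  # EDD accuracy: 64.6%
```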
Control Charts
All processes have variation. The challenge is to determine whether the variation is “common cause” (random, or “noise”) or “special cause” (nonrandom variation).
According to iSixSigma, an online clearinghouse for process improvement, “common cause variation is fluctuation caused by unknown factors resulting in a steady, but random, distribution of output around the average of the data.” Special cause variation is the inverse: variation caused by factors that result in a nonrandom distribution of output. It is also referred to as “exceptional” or “assignable” variation.6 Determining the cause of special cause variation typically requires further analysis/investigation.
For example, think of weighing yourself every morning. One day a six-foot man might weigh 201.2 pounds. The next day he is 200.4 pounds. The following day he is 200.8 pounds. Over time, he is probably hovering around 201 pounds, +/- two pounds. His weight is demonstrating normal (common cause) variation. Over the Christmas holidays, however, his weight might balloon to 205 pounds. In this case, the special cause variation is rather apparent: He consumed far more calories than he expended during the holiday. Unfortunately, identifying the special cause is rarely as straightforward.
Control charts show what type of variation is occurring in a process. Synonymous with statistical process control, control charts are a graphical view of a process. Special tests are conducted against the data to determine (a) the normal limits/variation of the process and (b) whether those limits have been “violated.”7 Control charts can also be considered a run chart “on steroids.” Run charts display observed data in a time sequence.8 Control charts take the simple run chart and apply statistical rigor to it: the mean (average) is calculated and drawn on a graph, the individual data points are plotted on the same graph, and the control limits are drawn at +/- three standard deviations from the mean.
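The following is a minimal sketch of that recipe, assuming a simple chart of individual values where sigma is taken as the standard deviation of the plotted points; dedicated control chart software estimates sigma differently for each chart type, and all names and data below are hypothetical.

```python
import statistics

def control_limits(values):
    """Center line and +/-3 sigma control limits, following the simplified
    recipe described above. (Dedicated software estimates sigma differently
    for each chart type, e.g., from moving ranges or from the binomial
    formula used by NP charts.)"""
    center = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical weekly EDD accuracy percentages
weekly_accuracy = [64, 61, 66, 63, 65, 62, 67, 64, 60, 66, 63, 65]
lcl, center, ucl = control_limits(weekly_accuracy)
print(f"LCL = {lcl:.1f}  Center = {center:.1f}  UCL = {ucl:.1f}")
```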
Figure 1 is an example control chart (with no rules “violations”). The middle (green) line is the mean. The upper and lower (red) lines are the upper control limit (UCL) and lower control limit (LCL).
When introduced in the 1920s, control charts were drawn on graph paper. Today, specialized computer programs (such as Minitab) create them automatically.
The type of control chart used depends upon the type of data, variable (continuous) or discrete (attribute). Figure 2 provides a decision tree for selecting the appropriate control chart. Figure 1, for example, is an NP chart.
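Because Figure 1 is an NP chart, its limits come from the binomial distribution rather than a plain standard deviation. A minimal sketch of that calculation, assuming a constant subgroup size and hypothetical counts:

```python
import math

def np_chart_limits(defective_counts, subgroup_size):
    """Center line and control limits for an NP chart with a constant
    subgroup size, using the binomial estimate of sigma."""
    np_bar = sum(defective_counts) / len(defective_counts)  # average count per subgroup
    p_bar = np_bar / subgroup_size                          # average proportion defective
    sigma = math.sqrt(np_bar * (1 - p_bar))
    lcl = max(0.0, np_bar - 3 * sigma)                      # a count cannot fall below zero
    ucl = np_bar + 3 * sigma
    return lcl, np_bar, ucl

# Hypothetical data: defective items found in ten subgroups of 100 items each
counts = [12, 15, 14, 17, 13, 16, 15, 14, 18, 12]
print(np_chart_limits(counts, 100))
```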
Figure 1: Example of a Control Chart. The NP chart of 3T plots sample count (y-axis) by sample number (x-axis), with UCL = 25.90, center line NP = 14.56, and LCL = 3.21; no points violate the control limits.
Figure 2: Control Chart Decision Tree (source: Nancy R. Tague’s The Quality Toolbox, Second Edition, ASQ Quality Press, 2005). In summary:
• Variable data, measured on a continuous scale (e.g., time, weight, temperature):
  – If each data point is a natural subgroup (such as one batch) or data are gathered infrequently: use a chart of individuals if the data are normally distributed; otherwise use a moving average-moving range chart.
  – Otherwise: use an X-bar and R chart or an X-bar and s chart.
• Attribute data, counted (e.g., defective items or complaints):
  – If defective items are counted: use a p chart if the sample size can vary; otherwise an np chart.
  – If the number of defects is counted (and an item can have many defects): use a u chart if the sample size can vary; otherwise a c chart.
The tests that can be run to determine whether a process is out of control vary by data type. More tests are available for continuous data, but both attribute and continuous data share the same four core tests:
1. 1 point > 3 standard deviations from center line
2. 9 points in a row on same side of center line
3. 6 points in a row, all decreasing or increasing
4. 14 points in a row, alternating up and down
The first test is the one typically associated with control charts (see Figure 3 for an example)—a point outside the control limits.
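As a rough illustration of how these four tests might be checked programmatically, the sketch below scans a series of plotted points against a previously estimated center line and sigma; the function and its inputs are hypothetical, and commercial packages such as Minitab implement these and additional tests.

```python
def check_core_tests(points, center, sigma):
    """Flag indices that violate the four core control chart tests above.
    Returns a dict mapping test number to the list of violating indices."""
    violations = {1: [], 2: [], 3: [], 4: []}
    for i, x in enumerate(points):
        # Test 1: one point more than 3 sigma from the center line
        if abs(x - center) > 3 * sigma:
            violations[1].append(i)
        # Test 2: nine points in a row on the same side of the center line
        if i >= 8:
            window = points[i - 8:i + 1]
            if all(p > center for p in window) or all(p < center for p in window):
                violations[2].append(i)
        # Test 3: six points in a row, all increasing or all decreasing
        if i >= 5:
            diffs = [points[j + 1] - points[j] for j in range(i - 5, i)]
            if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
                violations[3].append(i)
        # Test 4: fourteen points in a row, alternating up and down
        if i >= 13:
            diffs = [points[j + 1] - points[j] for j in range(i - 13, i)]
            if all(diffs[k] * diffs[k + 1] < 0 for k in range(len(diffs) - 1)):
                violations[4].append(i)
    return violations
```

In practice, the center line and sigma fed into such a check would come from the formula appropriate to the chart type, such as the NP chart limits sketched earlier.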
Figure 3: Example of an NP Control Chart With Test No. 1 Violated. The NP chart of 4s plots sample count (y-axis) by sample number (x-axis), with UCL = 9.35, center line NP = 3.64, and LCL = 0; three points fall outside the control limits and are flagged as test No. 1 violations.
The number of points associated with each test above is essentially an industry standard, although software programs (such as Minitab) allow you to configure/modify those values to suit your specific need/application.
When a test is violated, it is up to the user to determine the special cause. Test violations are not necessarily negative: they may indicate either a favorable or an adverse shift in a process, or simply an abnormal event that the process failed to handle.
Results
What is the variation trying to tell us about a process, about the people in the process?
—W. Edwards Deming
During their interviews with Cvengros, leaders from both hospitals (rather proudly) claimed improvements in the accuracy of the EDD—providing examples to Cvengros (see Table 2).
Cvengros knew her control chart analysis would show whether the hospitals were merely reacting to normal variation or whether their actions had resulted in a statistically significant favorable change.
Even though it was only one data point, Cvengros felt her analysis could serve as a proxy for each hospital’s willingness to change and adapt. In other words, she would have a quantitative way to assess whether the actions of the healthcare leaders backed their words on continuous improvement.
For More Information
• To contact the author of this case study, email Jack Boepple at j-boepple@kellogg.northwestern.edu.
• To view this and other case studies, visit the ASQ Knowledge Center’s Case Studies landing page at asq.org/knowledge-center/case-studies.
References
1. HCAHPS, http://hcahpsonline.org/home.aspx.
2. Medicare Hospital Compare, http://www.medicare.gov/hospitalcompare/.
3. Institute for Healthcare Improvement, “How-to Guide: Multidisciplinary Rounds,” http://www.ihi.org/resources/Pages/Tools/HowtoGuideMultidisciplinaryRounds.aspx.
4. Curaspan, “Estimating the Date of Discharge: Five Reasons to Do It,” http://connect.curaspan.com/blog/estimating-date-discharge-five-reasons-do-it.
5. Health Affairs, “Improving Care Transitions,” http://www.healthaffairs.org/healthpolicybriefs/brief.php?brief_id=76.
6. iSixSigma, “Common Cause Variation,” http://www.isixsigma.com/dictionary/common-cause-variation; “Variation (Special Cause),” http://www.isixsigma.com/dictionary/variation-special-cause/.
7. Wikipedia, “Statistical Process Control,” http://en.wikipedia.org/wiki/Statistical_process_control.
8. Wikipedia, “Run Chart,” http://en.wikipedia.org/wiki/Run_chart.
About the Author
Jack Boepple is a process improvement professional and adjunct professor at the Kellogg School of Management. He received his Project Management Professional (PMP) certification in 1999, his ASQ Six Sigma Black Belt certification (CSSBB) in 2007, his ASQ Manager of Quality/Organizational Excellence certification (CMQ/OE) in 2007, and his Professional in Healthcare Quality (CPHQ) certification in 2015. He served as an examiner for the Baldrige Performance Excellence Program from 2008–2010 and 2012.