“Why haven’t our continuous-improvement programs over the past 10 years given us sustainable improvements? We’ve focused on most of the top five ‘improvement tools’ with very little result. What are we missing?”
This question is being asked more and more these days. The good news is that it’s an excellent question. The better news is that someone in a leadership role is asking such a question. The not-so-good news is that there are far too many stalled continuous-improvement initiatives that should be similarly questioned.
What’s missing in too many continuous-improvement (CI) initiatives? The people? Leadership? Improvement tools? Overall purpose? Compelling need? In many cases, it’s all of the above.
Danger in the comfort zone
Frequently, the CI initiative itself drives the quest for improving business performance. Whether the initiative is TPM (total productive maintenance), RCM (reliability-centered maintenance), 5S, Lean, or something else from a long list of options, the intentions are almost always good. Each initiative requires a new perspective on how to get new things done to achieve new results. That, quite frankly, tends to be the fun part of rolling out new initiatives: new training on new tools to create a new mindset for solving old problems.
CI training and tools can be a pleasant departure from the run-of-the-mill problem solving—pleasant, that is, for some people. Others will choose not to be involved. They’re more comfortable with “the way we’ve always done things around here.” Change is not a priority. “We need stability, consistency, standard ways of doing things around here. Change is too risky.”
For many individuals in today’s workplace, there are comfort zones where routines prevail over the will to improve. This inertia of the past can be difficult to overcome. This, quite often, is where new CI initiatives come into play. “Let’s get everyone involved. That way they’ll see what can be improved and how they can pitch in to achieve new and higher levels of performance.” Unfortunately, it doesn’t always work out as planned. Soon after the CI initiative rollout, things fall back into “the way-we’ve-always-done-things-here routine” (the comfort zone). The situation reflects a culture defined by the past, i.e., “how we’ve done things that have made our business successful all these years.”
So, who’s pushing the CI rope uphill? Why isn’t everyone helping to pull it? Simply put, “They don’t know what they don’t know.” Consequently, all that CI training, multiple show-and-tell CI events, and countless measurements of CI deployment don’t seem to work. We must begin asking, “What don’t they know that they need to know?”
Initiatives versus evidence
About eight years ago, leadership at a certain plant began deploying machine-data collection devices so everyone could see how critical equipment was performing. It was a great engineering project, one that was intended to set the foundation for numerous CI initiatives targeting specific business-improvement needs. The project spanned a good four years and, eventually, several data-collection tools and associated displays were deployed.
The displays communicated, in scoreboard style, how the machines were running and when they were down or in a changeover mode. Most important, though, was the fact that they all spelled out the reasons for unplanned downtime. A plus was that these displays also showed planned production rates versus actual rates, and flashed the information for all to see. (One area manager even had engineering program the displays to show breaks and lunch and the time remaining, which seemed like Big Brother informing workers when they could take a break or go to lunch and how much time remained until the machines needed to be up-and-running again. Some saw that as a positive side benefit of the downtime displays.)
This initiative was labeled a success. Not much else came of the project, however.
On the other hand, what became of all that data residing inside the display-unit memory systems? The plant’s engineering team realized the capabilities of the displays went well beyond what one could see. For the most part, the rest of the plant staff and management didn’t know what they didn’t know about the captured data. While runtime/downtime status was automatically logged, and operators sometimes logged the downtime reasons, there was no evidence of this information ever being looked at, let alone put to use.
Mining data for all to see
Were these CI-led equipment-downtime data-collection displays worth salvaging? Digging into just one of the critical machines (a constraint in the production flow) proved quite revealing. “But, what can nearly 8,000 data entries from the past two months possibly tell us? Most downtime reasons are labeled ‘None’ anyway.” (They don’t know what they don’t know.)
But what if we were to take all that data, sort out the “None” reasons for downtime, and try to see what the rest were telling us?
The evidence pointed to a great many different downtime reasons over the two-month period. The operators really were capturing downtime reasons. A cursory analysis ranked the reasons a machine was down from most frequent to least frequent. (They don’t know what they don’t know.)
What is the value of knowing the downtime frequency for any reason if you don’t have the duration of downtime events? The downtime frequency without duration is what I affectionately call the “pain-in-the-butt factor.” All we know is how many times this thing happens. And the more it happens, the bigger the pain.
For machine-downtime data to be meaningful to the business, we need to understand not only the reason and frequency, but also the duration of the downtime. That’s when the true business impact of chronic downtime can be determined and specific countermeasures put in place to minimize, if not eliminate, the downtime cause.
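The analysis described above, filtering out the “None” entries, then weighing each downtime reason by total duration rather than frequency alone, can be sketched in a few lines. This is a minimal illustration only; the event data, reason labels, and field names below are hypothetical, not taken from the plant’s actual display system.

```python
from collections import defaultdict

# Hypothetical downtime log entries: (reason, duration in minutes).
events = [
    ("Jam at infeed", 4), ("None", 2), ("Changeover", 35),
    ("Jam at infeed", 6), ("Sensor fault", 22), ("None", 1),
    ("Jam at infeed", 5), ("Sensor fault", 30), ("Changeover", 40),
]

# Drop entries with no recorded reason, then tally frequency and total duration.
stats = defaultdict(lambda: {"count": 0, "minutes": 0})
for reason, minutes in events:
    if reason == "None":
        continue
    stats[reason]["count"] += 1
    stats[reason]["minutes"] += minutes

# Rank by total downtime, not by frequency alone: frequency shows the
# "pain-in-the-butt factor," but duration reveals the business impact.
ranked = sorted(stats.items(), key=lambda kv: kv[1]["minutes"], reverse=True)
for reason, s in ranked:
    print(f"{reason}: {s['count']} events, {s['minutes']} min total")
```

Note how the two rankings disagree in this toy data: the infeed jam is the most frequent reason (three events) but the smallest total loss, while changeovers, half as frequent, cost the most minutes. That gap is exactly why frequency without duration can point countermeasures at the wrong target.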
Confronting plant personnel with a treasure trove of captured data from machine-downtime display units can divide them into two camps: some will embrace the data as a first step in the CI journey; others will dispute the data’s validity and continue doing things the same way they’ve always been done. Let’s call the second group “informed naysayers.” They quickly remark: “We’ve tried to use those things in the past, and look where it got us. We shouldn’t trust the data because those operators are probably putting the wrong downtime reasons into the system.” (They don’t know what they don’t know.)
Continuous-improvement initiatives—regardless of their intent—must focus on meaningful business cases and compelling opportunities for improvement, and be built upon evidence rather than opinions from the comfort zone. There will be times in any CI journey when someone, whether from upper management or the plant floor, becomes a vocal opponent. (They don’t know what they don’t know.)
Make sure that you use actual equipment data to define your CI activities and show significant improvement as measured by the information being collected, analyzed, and acted upon by people closest to the machines. Help the naysayer crowd at your site learn more about what they don’t know as a part of their culture-changing paradigm shift. MT
Bob Williamson, CMRP, CPMM, and member of the Institute of Asset Management, is in his fourth decade of focusing on the “people side” of world-class maintenance and reliability in plants and facilities across North America. Contact him at RobertMW2@cs.com.