Archive | Preventive Maintenance

March 18, 2016

Final Thought: RCM — Great Tool or Ravenous Monster?

By Dr. Klaus M. Blache, Univ. of Tennessee Reliability & Maintainability Center

Reliability-centered maintenance (RCM) is a process designed to establish the safe minimum level of maintenance for each piece of machinery/equipment in a facility. It’s concerned with maintaining functionality of individual components in an entire system. Many companies are aspiring to do it. Others are doing it partially. A smaller number do it regularly. Some start and then stop. What’s going on?

While there are superficial variations of the methodology, differentiated mainly for marketing, and some real differences between full (classical) RCM and shortened versions, should RCM really stand for “resource-consuming monster”? Let’s first look at some key historical documents.

As summed up on the back cover of John Moubray’s 1997 RCM2 book, Reliability-Centered Maintenance (Industrial Press, New York), RCM is “a process used to determine systematically and scientifically what must be done to ensure that physical assets continue to do what their users want them to do.” RCM2 knowledge came from early studies in the military.

One of the most referenced documents is the 1978 U.S. Department of Defense AD-A066579 Reliability-Centered Maintenance report by F. Stanley Nowlan and Howard Heap (both with United Airlines). Their study generated the six failure curves you see in every RCM-related presentation.

Showing that age-related failures account for only about 11% of all failures drives much of the optimization of maintenance tasking. In 1996, the NAVAIR 00-25-403 report introduced Guidelines for the Naval Aviation Reliability-Centered Maintenance Process.

I’m personally familiar with SAE JA1011 (1999), which provides the minimum criteria for what should be in an RCM process. My reliability and maintenance team at General Motors was involved with Ford, Chrysler, Boeing, Caterpillar, Pratt & Whitney, Rockwell International, and many other contributing organizations to create a reliability and maintainability guideline. The result was a 1993 publication by the National Center for Manufacturing Sciences Inc., Ann Arbor, MI, and the Society of Automotive Engineers (SAE), Warrendale, PA. It was titled Reliability and Maintainability Guideline for Manufacturing Machinery and Equipment (publication M-110).

Regardless of the RCM process you plan to use, know that it will consume scarce operational and support resources. It’s important to determine what time is available and put it in your business plan.

An RCM analysis, among other things, requires an FMEA (failure modes and effects analysis) and concludes with PM optimization (selecting the best failure-avoidance strategy). Preventive maintenance (PM) optimization is a streamlined methodology that identifies failure modes and develops PM tasks to minimize/avoid failures.

Based on your improvement needs, allocate adequate time for each level of RCM. For critical and complex issues, do full RCM. For moderate issues, do an overall FMEA for similar equipment/components. For less-critical areas, just doing a PM optimization will be a good start. This approach can free up resources to do more crucial problem solving and predictive and preventive tasks. Identifying the annual total time available can help prioritize the levels of analysis to do.

I’ve found that if sufficient time is spent preparing for classical RCM, boundaries are clearly identified, and scope creep is managed during the event, full RCM doesn’t take much longer than shortened versions. Even simple things done prior to an RCM event, e.g., completing, with participant input, a draft of the three ranking scales (severity of problem, likelihood of occurrence, and likelihood of detection), can save time. If you start RCM/FMEAs without an implementation strategy, the resource-consuming monster will swallow you.
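To make those three ranking scales concrete, here is a minimal, generic sketch (not drawn from any of the standards cited above) of how FMEA teams commonly combine severity, occurrence, and detection rankings into a risk priority number (RPN) to decide which failure modes deserve full analysis first. The 1-to-10 scales, the example failure modes, and the ranking logic below are illustrative assumptions, not values from this article.

```python
# Illustrative only: a generic FMEA risk-priority calculation using the three
# ranking scales named above (severity, occurrence, detection). The 1-10 scales
# and the sample failure modes are common FMEA conventions, not from the article.

def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Classic FMEA RPN: higher values indicate higher-priority failure modes."""
    for name, value in (("severity", severity), ("occurrence", occurrence), ("detection", detection)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be on a 1-10 scale, got {value}")
    return severity * occurrence * detection

# Hypothetical failure modes for a draft worksheet: (description, S, O, D)
failure_modes = [
    ("Bearing seizure",       9, 3, 4),
    ("Belt mistracking",      5, 6, 3),
    ("Seal leak (slow drip)", 3, 5, 7),
]

# Rank the list so the RCM/FMEA event starts with the highest-risk items.
for mode, s, o, d in sorted(failure_modes, key=lambda m: -risk_priority_number(*m[1:])):
    print(f"{mode:<25} RPN = {risk_priority_number(s, o, d)}")
```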

Many RCM-process variations can work if they follow SAE JA1011 and are conducted under the proper circumstances. You must do adequate readiness investigation and preparation, however, to understand the limits, risk, and consequences of your chosen path. Used correctly, RCM is a great tool. MT

Based in Knoxville, Klaus M. Blache is director of the Reliability & Maintainability Center at the Univ. of Tennessee (UTK), and a research professor in the College of Engineering. Contact him directly at kblache@utk.edu.

March 18, 2016

“Pit Crews” Keep Snacks On Track

Cheetos snacks move through an accumulation conveyor at the Perry, GA, Frito-Lay manufacturing facility.

High-performance machines require highly skilled professionals who use a race-car team approach to maintenance and reliability at Frito-Lay’s largest North American manufacturing facility.

By Michelle Segrest, Contributing Editor

Lay’s potato chips move up the potato chip incline conveyor to seasoning.

The one million-sq.-ft. Frito-Lay manufacturing facility in Perry, GA, operates like a well-oiled, high-speed race-car track.

The operations teams drive the machines, but it’s the 100 maintenance professionals on five specialized teams who work in the garage and in the pits to build, repair, and optimize the equipment—taking it from the shop to the track. They ensure that production stays in constant motion as it circles the refined Frito-Lay course, around and around, nonstop, 24/7.

Perry’s director of maintenance and engineering, Craig Hoffman, is the crew chief. The overall maintenance philosophy requires proactive maintenance and methodologies, he said. However, just like a race-team pit crew, they must have the ability to respond to unexpected issues.

“NASCAR teams spend a lot of time in their shops building their cars, analyzing, making adjustments, and fixing problems. We use similar techniques,” Hoffman said. “Our foundation is planning and scheduling, which is supported by preventive and predictive maintenance and root-cause analysis. We do everything we can to make sure our equipment is ready to perform.”

In a facility that produces thousands of pounds of potato chips, tortilla chips, and many other Frito-Lay products per hour, the equipment must stay in optimal condition to deliver high-performance production, he said.

“Our job is to turn the equipment over, in the best possible shape, to the operations group. But every race day there is a situation where you have to respond. When something happens, we go into the pit-crew mentality—it’s all hands on deck. What is constantly on our minds is how to keep our equipment in safe, reliable, food-safe condition so that the drivers can continue to move the lines around the track. We do a great job upfront with our proactive technologies. I would love to say we are perfect. When, however, you have as much equipment as we do, something is going to happen. And we have to be able to respond.” 

The different teams play different roles, yet all share a common goal: to produce millions of pounds of snack foods annually.

The Perry facility houses 15 manufacturing lines that produce all flavor varieties of Frito-Lay snacks, including Doritos, Cheetos, Tostitos, Ruffles, Lay’s, Fritos, SunChips, Stacy’s, Smartfood, Rold Gold, and Funyuns. Built in 1988 with just two lines, the largest of Frito-Lay’s 36 North American manufacturing facilities has undergone several expansions over nearly three decades, including the addition of three production lines in the past 14 months.

Maintenance philosophy

Doritos nacho-cheese-flavored chips travel through the distribution system to packaging. Photos: Michelle Segrest.

Hoffman’s team is responsible for the maintenance of countless pieces of equipment, including fryers, ovens, extruders, a fleet of automated vehicles (including cranes and robots), weighers, kettles, pumps, motors, instrumentation, packaging equipment, seasoning-application equipment, boilers, air compressors, air dryers, switch gears, bag-packaging tubes, and several miles of conveyors throughout the facility.

The site’s maintenance professionals are divided into five teams that cover all facets of the facility:

  • core plant – includes all of the machines that manufacture, package, and process the larger, core products such as Lay’s and Doritos
  • bakery area – manufactures, packages, and processes baked products
  • facility – handles buildings, grounds, infrastructure, boilers, compressors, steam system, and other related equipment
  • warehouse – takes care of the shipping and distribution equipment, and all palletizing equipment, robots, and cranes
  • controls – manages the controls infrastructure, all operator interface terminals, PLC programming, and IT systems.

Hoffman teaches planning classes to all Frito-Lay employees. “I always cite the example of changing oil in the car,” he said. “Most people tell you to put the car up on blocks, drain the old oil, then put in the new oil. When I change the oil, I go into my shop first and make sure I have the oil filter. I make sure I have the oil. I make sure my jack is in good condition, and I have jack stands for safety. Then I make sure it is time to change the oil. A lot of people tear right into a project without having the right parts or the right information to do the job. To me, this is all about planning.”

“Another example is when you go on vacation,” Hoffman said. “I don’t know anyone who just wakes up one morning and says, ‘I’m out of here.’ You plan the vacation. You decide where you are going to go, what you are going to do, where you will stay. You buy tickets. You put a plan together before you go tackle that vacation, just like we would put a plan together before we would tackle any job. We are making sure we have the right parts, the right information, and the right tools to go execute good work.”

The work comes from the facility’s PM (preventive maintenance) system. Operators provide insight on how their machines are running. Then the maintenance team maps out a plan to restore the equipment to the optimal operating condition. When the plan is set, they schedule and execute it. “If you don’t have a plan, you have no control. If you fail to plan, you plan to fail.”

Even though it accounts for a low percentage of the time, unplanned maintenance also happens, according to Jim Northcutt, who is in charge of all maintenance and engineering for Frito-Lay’s 36 North American facilities. He coordinates the facility maintenance managers from the corporate office in Plano, TX, and executes a streamlined maintenance approach across all facilities.

“The company, as a whole, runs very efficiently,” Northcutt explained. “When we do have an unplanned event, the maintenance managers get their team marshaled around making sure they have the right tools and the right expertise to get it corrected and back online. There is not a silver bullet there. It is just really good people who work in our organization who are very talented.”

Best maintenance practices

Maintenance mechanics Mike Day and Dave Maddox oversee shop rebuilds.

Planning and scheduling is supported by an in-depth PM system, advanced technologies such as vibration analysis and ultrasound, and carefully crafted PdM (predictive-maintenance) processes.

For corrective work, the planners and schedulers go to the storage area and check out several parts and then kit them for the mechanics, Hoffman said. Then jobs are reviewed with the mechanics. “The key here is to make our mechanics as successful as possible by giving them the right equipment, the right parts, and the right tools to maximize wrench time. This way, when they are out on the floor they have everything they need. It eliminates travel time back and forth and maximizes our ability to perform corrective work and keep our plant in a reliable state.”

When the mechanics receive a schedule, it determines the location of the kitting bin. The bins are numbered and lettered so the mechanic can easily find them and be prepared to successfully perform the job.

The planning and scheduling foundation translates across all North American facilities, Northcutt said. “If you look at it in its most simplistic terms, we plan it, we schedule it, we execute it,” he said. “As a company, throughout all facilities, planning and scheduling is what we hang our hat on.” 

Other best practices include using condition-based approaches and the previously referenced predictive technologies, i.e., thermography, ultrasound, and vibration analysis. Staffing and development is also important, said Richard Cole, director of maintenance and engineering at the Fayetteville, TN, facility.

“It is crucial to have the right people in the right place,” Cole said. “We are continuously developing their skills. We leverage local junior colleges and trade schools to bring in students as interns to work with the mechanics and get training. We have a strong focus around processes and systems, planning and scheduling, work orders, and predictive maintenance. We must always be looking at continuous improvement from scorecards and action plans. Reward and recognition also plays a role in our maintenance strategy.”

Knowing the score

To stay on track, Frito-Lay believes in knowing the score.

“We track our downtime performance here very closely,” Hoffman said. “We have the ability, through technology, to monitor our line performance almost to the minute. I challenge my managers and my mechanics to always know the score. It’s just like how a racecar driver knows what lap he is on, how much fuel he has left, and how much air is in the tires—he knows when to make a pit stop. You always have to know where you stand against the target you set.”

Maintenance planners Tim Waller, Don Reynolds, and Jeff Tuck take a break in the maintenance-parts room. Planning, scheduling, and kitting parts is a key component of the overall maintenance strategy at Frito-Lay.

Frito-Lay’s key performance indicators (KPIs) include safety scores, such as the number of days the facility has gone injury free. They also measure total downtime, equipment downtime, operation downtime, changeovers, and material-related downtime.

“We have to have our house in order and provide a stable, safe work environment for our operators,” Hoffman said. “With multiple changeovers, the quality could go south fast and our operators become extremely frustrated. If we hold our equipment reliability at the highest level, our operators have a very good chance to have a successful day. It allows them to focus on their quality metrics, how their line is running, and how we are holding our product to the highest standard. This is especially important when [it comes to] making food.”

Northcutt said anyone at any of the facilities in the U.S. and Canada can immediately see the metrics.

“I’m an old football coach, and I believe in knowing the score,” Northcutt noted. “Mechanics and those running the equipment from an operations perspective all know the score. This includes everything from planning and scheduling to inventory control to efficiency. Our ability to focus in on performance to improve performance makes us unique as an organization. On a weekly basis, the operations and technical teams come together to talk about outages or failures and then they step back and consider if it’s systemic or a piece of equipment. We call that ‘fix it forever.’”

Frito-Lay promotes internal competitions among facilities to inspire the operations and maintenance teams to keep score on key metrics. The company provides performance reports and ranks the various sites in different categories. There is a national downtime competition throughout the year that measures uptime and unplanned downtime. Winning teams are recognized through various company incentives.

“In this business, we like to know if we won,” Hoffman said. “If you don’t keep the numbers visible to the team, and if they don’t think it’s important to the leadership, their motivation will falter. Keeping the score is the greatest motivational technique we have in this business. Talk to any of my mechanics, they will tell you that I’m all about watching the downtime numbers with a goal of minimal downtime.”

The Fayetteville site’s Richard Cole pointed out that the friendly competitive challenges across facilities are motivational, but the teams also remember they are ultimately on the same side. Successful new processes and systems are shared across sites and the camaraderie that develops is strong. Support is given throughout the company, whether it’s hands-on, directional, or coaching to help personnel at all Frito-Lay sites improve performance.

Keeping up with new technology

Maintenance mechanic Fred Luther uses ultrasound technology as part of routine predictive maintenance.

Because Perry is the largest, most complex Frito-Lay facility, it has become the test site for new technology.

“If there is a new piece of equipment, we have very close contact with corporate engineering and our research-and-development team. They want to bring it here and let us try to help make it successful or let us cut our teeth on it and prove it before we deploy it to other facilities,” Hoffman said.

The Perry facility also has technically apt teams. “We are blessed with some of the most highly skilled maintenance and technology professionals in the company,” he added. “So we get all the new toys. It’s kind of cool. It challenges us.”

The teams go through rigorous training with the equipment vendors and supplement it with training at local technical schools. They also solicit other vendors and suppliers to provide training programs and classes on new technology.

Leveraging improvement, energy, and reliability

According to Hoffman, many different facets of continuous improvement are introduced at the Perry site and throughout all Frito-Lay operations. Through root-cause analysis, issues are engineered to avoid repeat failures, and improvement programs are launched to upgrade or harden pieces of equipment to increase reliability.

The team also troubleshoots how to reduce utility consumption while maintaining reliability. They study how to reduce parts costs and the overall cost of making the product.

“Our primary focus in the reliability business is just that…how do we become more reliable?” Hoffman said. “A lot of continuous improvement involves hot teams. So if there is an issue on the floor, for example, repeat failures, or if the operations team cannot get to the quality metrics they need, we will launch a hot team right there. Often cases involve managers, maintenance technicians, and operations professionals. We’ll brainstorm and come up with ideas, call outside vendors, and find some potential improvements.”

Palletizing robots prep product for distribution.

The focus extends beyond reliability issues. Hot teams are also formed to solve issues surrounding quality, safety, and operation optimization, and to reduce the overall cost of production.

“When you have a major failure or breakdown on a manufacturing line, everything sits there running,” Hoffman said. “You are still using gas to keep the ovens and fryers hot. Electricity is making all the other motors turn, but if you’re not making product, you’re just wasting utilities. If you have a reliable plant, inherently you improve your utility usage because you make product when you are supposed to.”

Frito-Lay supports other programs, including combustion tuning, minimizing fuel usage, and reducing utility consumption.

Production consistency

PepsiCo, Frito-Lay’s parent company, recently celebrated its 50th anniversary. The corporate arm keeps a keen eye on maintaining consistency throughout its processes, Northcutt said.

“Anytime you have a multi-plant environment, you have to have consistency,” Northcutt said. “A Lay’s potato chip made in California or Canada has to taste the same as the ones produced in Georgia. One thing we did well many years ago was rolling out and making sure everyone had the same tools, the same CMMS, the same inventory control, and the same purchasing process. We rolled out ultrasound as our primary condition-based tool. Consistency from one site to the other is something that becomes really important. We make sure to have consistent applications and then everyone is on the same playbook.” MT

Michelle Segrest has been a professional journalist for 27 years. She has covered the industrial processing industries for nine years and toured manufacturing facilities in 28 cities in six countries on three continents.

Frito-Lay Fayetteville Facility Earns Maintenance Excellence Award

The Foundation for Industrial Maintenance Excellence (FIME) organization is dedicated to the recognition of maintenance and reliability as a profession. FIME sponsors the North American Maintenance Excellence (NAME) Award, which is an annual program that recognizes North American organizations that excel in performing the maintenance process to enable operational excellence.

Frito-Lay’s Fayetteville, TN, site was the recipient of the prestigious award in 2011.

Jim Northcutt and Richard Cole were heavily involved in fulfilling the stringent requirements to achieve this honor.

“Jim and I are constantly looking outside of Frito-Lay to study industry trends and best maintenance and manufacturing practices,” Cole said. “It’s important to have opportunities to see what other companies are doing and research new technologies to bring back to the organization.”

Through the NAME Award process and also finding industry partners, including the Univ. of Tennessee Reliability and Maintenance Center and organizations such as SMRP, Frito-Lay has been able to connect with various colleagues to benchmark performance.

“We like to challenge ourselves to find out how good we can possibly be,” Cole said. “This benefits our own culture, as well as the entire American manufacturing culture.”

During the lengthy application and selection process for the NAME award, Cole worked closely with Northcutt at the corporate level to see how the Fayetteville site stacked up as a world-class manufacturing facility.

FIME sends four to five technical experts to assess the site in many different categories for a week. “They then give an assessment and let you know how you perform and where you need to improve,” Cole said. “Our processes, systems, teams, skills, and leadership hit this high level, so we were recognized for the award.”

Frito-Lay was then able to use the Fayetteville site as an example for its other facilities.

The objectives of the NAME Award, a nonprofit program established in 1991, are to:

  • Increase the awareness of maintenance as a competitive edge in cost, quality, reliability, service, and equipment performance.
  • Identify industry leaders, along with potential or future leaders, and highlight best practices in maintenance management.
  • Share successful maintenance strategies and the benefits derived from implementation.
  • Understand the need for managing change and stages of development to achieve maintenance and reliability excellence.
  • Enable operational excellence.

Winners of the NAME award are site-specific. Some years there are no winners and some years there are two or three winners. It’s a rigorous process, but those who qualify earn the award.

March 18, 2016

Monitoring Slow-Speed Bearings With Ultrasound

An ultrasound program associated with the critical coal-handling conveyor system at Dakota Gasification’s Great Plains Synfuels Plant is proving that catastrophic slow-speed bearing failures can be avoided.

This maintenance professional’s account of his site’s experience with ultrasound technology lays out details that can help others.

By Ron Tangen, CMRP, Dakota Gasification Co.

Dakota Gasification Co. (DGC), a for-profit subsidiary of Basin Electric Power Cooperative, Bismarck, ND, owns and operates the Great Plains Synfuels Plant, a coal gasification complex near Beulah, ND. The plant produces pipeline-quality synthetic natural gas and related products.

It’s a given that coal gasification requires a conveyor system to transport coal. As others may have, Dakota Gasification has experienced its share of frustration with conveyor-bearing performance. An evaluation of the problem suggested that it was impossible to eliminate failures. With this in mind, we set a goal to simply minimize the occurrence of catastrophic slow-speed bearing (SSB) failures. This meant finding a technology that allowed us to accurately monitor bearing condition and maximize service life, as well as alert us when SSBs reached the end of their life, so we could remove them from the system.

DGC’s Coal Handling Operations group recognized the SSB issue several years earlier and, as a result, established a policy of weekly walk-downs of the conveyor system. During these walk-downs, technicians would perform a physical-senses evaluation of each of the main pulley bearings. At some point, a handheld infrared pyrometer was added to their toolbox to help identify failing bearings. Even though it operated close to the bottom of the P-F (potential failure) Curve, this strategy was better than no program, and some successes were realized. Still, we experienced two to four failures per year in a 400-bearing system.

Predictive maintenance (PdM) strategies operating this close to equipment failure inherently have some shortcomings—even when looking at SSBs. With our coal conveyors, the short window for failure detection is a key disadvantage. Bearings often don’t enter the “visual senses” portion of the failure curve until just days or hours prior to catastrophic failure. Operations personnel might perform inspections and report no problems, only to have a failure in the same week. Another disadvantage is the increased cost of catastrophic failure over planned replacement: safety aspects, manpower, production loss, and collateral damage costs far exceed bearing cost. Such issues helped justify transition to an ultrasound-based program.

Leveraging ultrasound for SSB monitoring, we’ve been able to move from one end of the P-F curve to the other—from reactive/failure to predictive maintenance. We would have considered ultrasound a success if it had provided even a few days or weeks of early detection. With this technology, however, we can now produce a report that identifies bearings at risk of failure as much as 12 months out.

Every five weeks, technicians on nine different routes collect data from the main pulley bearings in the coal-handling conveyor and then download it into ultrasound software for archiving, trending, and analysis.

Background

DGC had been using ultrasound for about 15 years in other applications. As we considered our SSB failure problem, the structure-borne nature of bearing applications led us to look into acoustical ultrasound. After trial testing the technology on our conveyor bearings, we conducted a hands-on field demonstration with our operations superintendent. This, we hoped, would be the key to securing high-level management buy-in that was so crucial to the program’s success.

Once the operations superintendent donned the ultrasound equipment and was able to “hear” a couple of bearings for himself, he began working his way down the conveyor gallery to listen to the rest. Upon reaching the end of the gallery, he directed us to implement the technology at DGC “as soon as possible.”

The Great Plains Synfuels Plant, a commercial-scale coal gasification complex near Beulah, ND, produces pipeline-quality synthetic natural gas and related products.

Program details

Since our coal-handling system includes nine major buildings, we created nine ultrasound routes. Technicians from the operations group perform data collection every five weeks. There’s no magic to this interval: we simply wanted one short enough to provide two or three readings within a failure cycle. Also, because operations personnel were accustomed to weekly conveyor-system walk-downs, a five-week interval would help gain program buy-in through reduced manpower and avoidance of the “monthly route” syndrome.

Data from routes (collected only on main pulley bearings, not idler bearings) are downloaded into the ultrasound software for archiving, trending, and analysis. Two key elements of the data are the recorded ultrasound wave file and bearing decibel (dB) value. The wave file contains several seconds of digitally recorded sound; the dB value represents its intensity. Without these elements, analysis wouldn’t be possible.

Inner-race spalling in a failed bearing.

Analysis process

Several features of our ultrasound tools have been particularly helpful.

Our ultraprobe data collectors allow us to digitally record several seconds of wave-file (sound) data. In addition, a high level of ultraprobe sensitivity allows us to detect small failure modes, i.e., we can track a bearing’s health through its entire life. For example, many bearings are classified as having a “zero-dB fault”—meaning a cyclical fault inside them can clearly be heard (and seen) while the ultrasound equipment is registering a 0-dB sound level (Fig. 1). Although the bearing reflected in Fig. 1 has a fault, it may be years, or even decades, from failure. Typically, we don’t consider replacing a bearing until the sound level is in the 25-to-30-dB range.

Fig. 1. Zero dB fault.

The ultrasound software lets us replay wave files to interpret sound signatures and to analyze dB-value changes over time. This helps establish trend history and future risk of failure. The software also helps us see the amplitude and pattern of a sound—which is a valuable capability when analyzing bearing health.

One challenge that comes with high sensitivity is the presence of competing (structure-borne) ultrasound sources that might be heard along with the bearing sound signature. Competing sources can come from coal (product) falling onto belts or through metal chutes, gearbox noise, or a nearby bad idler bearing. When listening to a bearing, what you actually hear are impacting and/or white-noise frictional forces.

Impacting consists of short-duration frictional forces caused by failure modes such as particle contamination, pitting, spalling, fretting, or broken parts, e.g., a cracked race.

White noise is caused by constant frictional forces. Even “good” bearings have some white noise, i.e., all have some level of constant frictional force acting on them as they rotate. Elevated levels could be a result of new/tight bearings, high dynamic loading, inadequate lubrication, or misalignment.

Fig. 2. Classification is ‘OK.’

Fig. 3. Classification is ‘Moderate Impacting.’

Fig. 4. Classification is ‘Moderate White Noise.’

The recorded wave file is transferred to the software for analysis. This is where the bearing’s signature is established. Each wave file has a unique signature relating to the bearing condition. To simplify things, we evaluate this signature in terms of its impacting and white noise. While signatures may clearly be dominant in one way, a bearing typically reflects a mix of impacting and white noise. Determining how much and what level of each helps establish the bearing’s overall health (Figs. 2, 3, 4). This signature and the dB value of the bearing provide insight into the level of deterioration.

Since our bearings operate at speeds between 70 and 80 rpm, FFT (Fast Fourier Transform) analysis isn’t feasible. As a result, all analysis on these SSBs is done through our ultrasound equipment’s time series analysis software and interpretation of trended decibel values.
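The classification above is done by listening to the recorded wave files and reading trended dB values. Purely as an illustration, and not DGC’s actual software or method, one simple time-series metric that tends to separate impact-dominated signatures from white-noise-dominated ones is the crest factor (peak amplitude divided by RMS): sharp impacts drive the peak up relative to the RMS, while broadband white noise does not. The threshold and synthetic signal below are assumptions.

```python
# Illustrative sketch, not DGC's analysis: crest factor (peak / RMS) as a crude
# separator between impact-dominated and white-noise-dominated time series.
import numpy as np

def crest_factor(samples: np.ndarray) -> float:
    """Peak amplitude divided by RMS; impacting drives this ratio up."""
    rms = np.sqrt(np.mean(samples**2))
    return float(np.max(np.abs(samples)) / rms)

def rough_classification(samples: np.ndarray, threshold: float = 6.0) -> str:
    # The 6.0 threshold is an assumption for this synthetic example.
    return "impacting-dominated" if crest_factor(samples) > threshold else "white-noise-dominated"

# Synthetic example: baseline frictional "white noise" vs. the same noise plus periodic impacts
rng = np.random.default_rng(0)
noise = rng.normal(0, 1, 48_000)
impacts = noise.copy()
impacts[::4_000] += 12.0          # add sharp periodic impacts

print(rough_classification(noise), rough_classification(impacts))
```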

Analyzing SSBs with ultrasound allows us to accurately monitor bearing health over the component’s entire life. This, in turn, allows time to recognize a bearing that’s nearing the end of its useful life or catch one that’s moving into catastrophic failure.

Decibel values charted within the ultrasound software provide important information regarding a bearing’s historical and current conditions—and can even be used to gain insight into the future failure risk. Since individual data points seldom plot in a straight line, “normalizing” the information by drawing a straight line through it provides a more linear perspective in establishing a bearing’s historical and projected trends. The line’s slope represents the component’s rate of failure:

  • A slowly rising slope indicates a bearing that’s deteriorating/failing slowly.
  • A rapidly rising slope indicates a bearing that’s deteriorating/failing rapidly.

Extending the historical trend line into the future allows users to anticipate future bearing health and risk of failure (Fig. 5). The intersection of this line with the established target decibel level for bearing replacements (30 dB in Fig. 5) can provide good clues as to remaining operating life and the timeframe available for a planned replacement.
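Here is a minimal sketch of that trend-line idea, assuming route readings at the five-week interval and the 30-dB replacement target used in the article’s example. The dB readings below are invented, and a least-squares fit simply stands in for the “normalizing” straight line described above.

```python
# Minimal sketch: fit a straight line through trended dB readings and extrapolate
# to the replacement target (30 dB in the article's example). Data are invented.
import numpy as np

def days_until_target(days: np.ndarray, db_values: np.ndarray, target_db: float = 30.0):
    """Least-squares trend; returns estimated days from the last reading until the
    trend crosses target_db, or None if the trend is flat or falling."""
    slope, intercept = np.polyfit(days, db_values, 1)   # dB per day
    if slope <= 0:
        return None
    crossing_day = (target_db - intercept) / slope
    return max(0.0, crossing_day - days[-1])

# Readings collected on the ~5-week route interval (day number, dB value)
days = np.array([0, 35, 70, 105, 140, 175])
db   = np.array([8, 10, 13, 15, 19, 22])

remaining = days_until_target(days, db)
print(f"Estimated {remaining:.0f} days until the 30-dB replacement target")
```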

Decibel values charted within the ultrasound software provide important information regarding a bearing’s historical and current conditions. Extending the historical trend line into the future allows users to anticipate future bearing health and risk of failure. The line’s slope represents the component’s rate of failure: The intersection of this line with the established target decibel level for bearing replacements (30 dB) can provide good clues as to remaining operating life and the timeframe available for a planned replacement. Typically, DGC doesn’t consider replacing a bearing until the sound level is in the 25-to-30-dB range.

Based on analysis of the sound signature and recorded dB level, the bearing can be classified, or graded, regarding its condition. To maintain consistency in our grading process, we developed a failure-classification chart. This chart has been periodically updated over the years to align with the actual deterioration in bearings we’ve removed from the field. Moreover, it only applies to DGC’s conveyor bearings—mostly 3- to 4-in.-dia. spherical roller designs from one manufacturer. The bearing-classification information is also documented in the ultrasound software for purposes of reference and reporting.

Rolling-element spalling.

Value-added insight

Our ultrasound program has provided some value-added insights beyond what we had expected from the technology. Examples include helping track new bearing “wear-in.” We originally anticipated that when a “bad” bearing was replaced, the decibel reading on the “good” bearing would be significantly lower, i.e., close to a normal operational baseline. In fact, after pulling 25- to 30-dB impacting bearings from the conveyor system, we’ve discovered 25- to 30-dB white-noise signatures on their replacements.

Although manufacturers suggest a bearing will wear in over a few days or weeks, ultrasound has shown us a few months to years may be more likely. This isn’t to say OEMs are wrong, but rather to point out ultrasound’s high degree of sensitivity to frictional activity within bearings.

This graph represents what the author has come to identify as a “typical” ultrasound bearing life cycle. It shows an upward slope as the bearing goes into “failure,” and a downward sloping bearing wear-in period as it returns to baseline. In this example, he manually inserted two ‘markers’ into the trend data: one to represent when the bearing was replaced, and another to establish the anticipated new baseline.

DGC’s ultrasound experience also suggests correctly lubricated SSBs can survive failure modes that quickly fail bearings in high-speed applications. Some of our SSBs have operated with a cracked race for several years. One that we removed and analyzed had significant spalling on the inner race and rolling elements; its outer race had a chip, crack, and break.

A failed bearing with an outer-race crack, chip, and break.

Ultrasound has also provided insight on the ability of SSBs to recover from certain levels of failure modes. In one case, we saw a bearing decibel trend move, over a 3-yr. period, into accelerated failure and recovery four times. At first glance, the random and inconsistent trend seemed to point to a data-collection problem. A closer look, however, indicated the bearing actually went through incipient failure and recovery. In the end, as we normalized the data, we were still able to establish an overall failure rate.

Despite some challenges since its implementation, no one has said the slow-speed bearing ultrasound program isn’t worth the effort or is not delivering value for the site’s owner/operator Dakota Gasification Co.

Despite some challenges since its implementation, no one has said the slow-speed bearing ultrasound program isn’t worth the effort or is not delivering value for the site’s owner/operator Dakota Gasification Co.

Where we are

Although we at DGC still have much to learn about ultrasound, I’m pleased with the progress of our SSB monitoring program. Despite some challenging situations, no one has said the program isn’t worth the effort or isn’t delivering value to our company. Just as important is the fact that managers are now requesting quarterly predictions of high-risk bearings rather than a single annual report.

To date, we don’t have a calculated statistical level of improvement for the program, but the number of visibly damaged components in our bearing showcase continues to grow every year. Each of these bearings represents a catastrophic failure that was avoided—and dollars added to DGC’s bottom line. MT

Ron Tangen is a maintenance-engineering specialist for Dakota Gasification Co. (dakotagas.com), a for-profit subsidiary of Basin Electric Power Cooperative (basinelectric.com). A Certified Maintenance and Reliability Professional (CMRP), Tangen is based at the company’s Great Plains Synfuels Plant, a commercial-scale coal gasification complex near Beulah, ND. For more information on this article, email him at rtangen@bepc.com.

January 12, 2016

Use Infrared to Detect Underground Leaks

Suspect a buried leak? Thermography can help you find it.

By James Seffrin, Director, Infraspection Institute

Leaks are a common problem with underground piping systems. Under the correct conditions, infrared thermography can help you detect evidence of leaks from buried systems that carry hot or cold product.

This set of images shows a thermal pattern created by a hot-water line buried beneath a street along a concrete curb. The large, amorphous shape at the center was caused by an underground leak at an expansion loop. Photo: Infraspection Institute

When a leak develops in a buried piping system, be it underground or within a concrete slab, fluid is lost to the surroundings. If a leak from a piping system that carries heated or cooled fluid is sufficiently large, a temperature change may occur at the surface of the ground or concrete in the vicinity of the leak.

Leaks from buried piping are generally characterized by amorphously shaped thermal anomalies that appear along the system pathway. The ability to detect a pipe leak will be influenced by interdependent factors including, but not limited to:

  • pipe operating temperature
  • pipe-system construction
  • burial depth
  • amount of loss
  • soil type and moisture content
  • ground cover.

Excavation of the exception area indicated by the hot-water line thermographic imagery confirmed that a leak in the continuous-jacket piping system had occurred in one leg of an expansion loop. In this case, infrared inspection allowed the repair team to zero in on the problem and keep excavation to a minimum. Photo: Infraspection Institute

Infrared inspections of buried piping systems located outdoors are best performed late at night with calm wind conditions. Such inspections may be performed on foot, from a motor vehicle, or from an aircraft. Late-night inspections eliminate the effects of solar loading and solar reflection. Note, however, that infrared inspections of indoor piping systems may be performed at any time of the day.

During an inspection, the thermal imager is maneuvered over the pipeline pathway. Well-defined straight lines that correspond to the location of the buried lines generally indicate a healthy piping system. Amorphously shaped thermal anomalies that can’t be explained in terms of piping-system construction or features may indicate leaks and should be marked and subsequently investigated for cause. MT

Jim Seffrin, a practicing thermographer with 30+ years of experience in the field, was appointed to the position of Director of Infraspection Institute in 2000. This article is based on one of his “Tip of the Week” posts on IRINFO.org. For more information on detecting underground-piping leaks and countless other infrared applications, as well as various upcoming training and certification opportunities, email jim@infraspection.com or visit infraspection.com.

November 25, 2015

Schneider Electric Furthers IIoT Evolution

“The Industrial Internet of Things [IIoT] is an evolution, not a revolution,” was the lead statement at Schneider Electric’s press conference at the SPS show in Nuremberg, Germany, Nov. 24, 2015. To support that claim, Clemens Blum, Schneider’s executive vice president of industry business, referred to the description of a 1999 Computerworld Smithsonian Award the company received. That description spoke of connected products and systems operating as part of a larger system of systems, and of smart plants and machines with embedded intelligence, integrated to enable the smart enterprise, improve efficiency and profitability, increase cybersecurity, and improve safety.

Rainer Beudert, marketing director of machine solutions, followed with a discussion of smart machines and how they fit into the evolving IIoT. He described Schneider’s definition of a smart machine as one that intuitively interacts with operators; assists with predictive maintenance; minimizes its environmental footprint; and provides modularity, connectivity, plug-and-work setup, self-awareness, reusable design, digital mobility, and data management. It also makes available information about status, configuration, conditions, quality, and features.

To learn more about what Schneider is doing to further the IIoT evolution, view the press-conference video at: https://www.youtube.com/watch?v=WI7JnKh3eV0

November 16, 2015

Consider The Common Cause Method

What appear to be stand-alone issues in different areas of a plant may actually have some things in common. Rooting out those factors is the only way to prevent such problems from recurring.

By Randall Noon, P.E.

Common-cause analysis is a relatively straightforward root-cause method. Sometimes referred to as “determining the cause of causes,” it’s used to identify an often latent, overarching causal factor creating problems in various places or departments within an organization. Occasionally, what appears to be a stand-alone problem in one area results from the same factor that is creating what appear to be stand-alone problems in another area (or areas).

Logically, the fact that the same basic problem is occurring in different departments is a significant clue that the culpable causal factor is something shared or otherwise common to the affected departments. This is akin to two strangers becoming similarly sick, one in the morning and one in the afternoon, by drinking from the same contaminated well. A doctor concludes that each has ingested a harmful dose of Escherichia coli. Unfortunately, while this diagnosis is accurate in both cases, people will continue to become ill until the common cause—the contaminated well—is identified and addressed.

Two organizational characteristics frequently prevent ready identification of a common causal factor:

  • The tendency of a department’s administration to solve its own problems and resist outside help. This might be due to pride, e.g., “we solve our own problems in this department, thank you very much.” Or it might be related to a perception of competency, as in “real managers don’t need help to put their own houses in order.”
  • The tendency of a department’s administration to mind its own business. This might be due to nothing more than lack of familiarity with another department’s day-to-day problems. Or it could be associated with a perception by some managers that another department head might be meddling or encroaching upon their territories.

In any case, the result is the same. The problem is solved as best as possible within the administrative envelope and resource limitations of the affected department. If, however, the causal factor originates outside the department’s envelope, only the symptoms occurring within the affected area are addressed. The fundamental deficiency originating outside the department’s envelope remains in place. Alas, much like a contaminated well, the common cause continues to create problems.

Not-too-hypothetical example

The following example of a problem solved by common-cause analysis demonstrates how the method can be applied.

Tom, Rick, and Sherry make up a three-person department tasked with writing procedures, instructions, and work orders for the entire company. In the past year, there were several instances in different areas of the operation where infrequently performed work wasn’t done correctly. These situations resulted in significant rework and, on occasion, regulatory problems.

Each affected department investigated its own problem and determined that the cause was a type of human-performance error. One department blamed non-compliance of workers with the given procedure. Another attributed its issue to a lack of self-checking by personnel. A third department questioned the competency of the supervisor in charge of the problematic work—which led to the inclusion of a career-limiting letter in his file for not providing sufficient oversight.

In the sports world, when a basketball goes out of bounds during play, the referee establishes fault based on which team touched it last. A similar rule appears to have been applied in this workplace example: In each case, the last person to touch the work was blamed for the work “going out of bounds.” While this is a simple way to determine fault, it generally doesn’t prevent recurrence of the problem.

Consequently, after operations in this not-too-hypothetical example experienced a larger than expected number of “human-performance errors” over the course of a year, a third party in the company undertook a common-cause analysis. Individual causal analysis reports that documented each department’s investigation—15 in all—were gathered and reviewed as a group to determine any commonalities. The first-cut review of these reports identified the following common factors:

  • In 6 of 15 cases, electricians performed the work.
  • In 8 of 15 cases, mechanics performed the work.
  • In 1 of 15 cases, a welder performed the work.
  • All 15 cases involved reading a document and then executing the work instructions contained in the document.
  • All 15 cases involved work tasks that were performed infrequently, and by crews often composed of people who had not done the work before. Thus, workers could not depend upon memory.
  • All 15 of the cases had supervisory oversight and all of the workers had pre-job briefings. Three supervisors were involved in all 15 cases.

In critically assessing these first-cut findings with respect to the original departmental conclusions, the third-party reviewer reached the following conclusions:

— Procedure non-compliance, which was cited as the primary causal factor in several cases, is a fact—not a cause. It does not explain why the workers did not comply with the procedure. The fact that there were 15 similar instances of non-compliance among a variety of workers is incongruous with their otherwise good work records. In logic, this type of conclusion is called “affirming the antecedent.” The conclusion that the procedure wasn’t followed follows readily from the initial problem statement that the work wasn’t done correctly. No investigation was needed to figure that out. Consequently, the conclusion provided nothing useful going forward to prevent recurrence.

— A lack of self-checking, which was cited as a primary causal factor in several cases, is fundamentally an assumption. The line of thought assumes that, had the workers self-checked “properly,” mistakes would not have been made. There are two deficiencies in this reasoning:

  • The first is the assumption that self-checking will catch errors. More often than not, it won’t. Various studies indicate that the probability of a person who made an error detecting it during a self-check review is too low for self-checking to be considered a substantive error-prevention method. For example, an experienced proofreader will often catch simple grammar and spelling errors the author missed despite having personally self-checked the document several times. Likewise, the number of mistakes routinely flagged by the spell-check feature in most word-processing programs demonstrates that self-checking of spelling doesn’t work very well.
  • The second deficiency is that finding errors through self-checking requires that the standard against which the check is being made is correct. For example, a person who believes that connecting a blue wire to lug A is the right thing to do will not detect the fact that the red wire should actually be connected to lug A. He can self-check the connection many times, but as long as he believes the blue wire should be connected to lug A, the error will be repeated. Further consideration shows that a conclusion of a lack of self-checking when a procedure is involved is no different from a conclusion of procedure non-compliance. This is another variant of affirming the antecedent.

— Lack of oversight was cited in one case. The supervisor and workers in this situation reviewed the procedure prior to starting the work, and all agreed on how it should be done. The supervisor, however, was held accountable because he had been in charge of the work. A finding of accountability is not generally synonymous with a finding of cause. Further, assuming that the group of 15 events had a common causal factor, a lack of oversight in one case doesn’t explain the other 14 instances.

Importance of the written word

To determine if there was something about the written procedures, instructions, and work orders that played a role in inducing errors, the various documents involved in the events were vetted. First the authorship of each document was determined. While none of the 15 reports involved documents written by Tom, two documents written by Sherry and 13 written by Rick were involved.

Since Tom, Rick, and Sherry shared writing duties more or less equally, if authorship were not a factor, you would expect roughly 5 +/- x documents from each writer among the 15, where x might be 1 or 2. Since 13 is 2.6 times that expected average, this was a clear line of inquiry to follow.
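As a rough back-of-the-envelope check (not part of the article’s investigation), the chance of such a skew arising randomly can be estimated with a binomial model, assuming each of the 15 problem documents was equally likely to have come from any of the three writers.

```python
# Back-of-the-envelope check (not from the article): if authorship were not a factor,
# each of the 15 problem documents would have a 1-in-3 chance of being Rick's.
# The probability that 13 or more of 15 land on one given writer by chance is tiny,
# supporting the decision to treat authorship as a line of inquiry.
from math import comb

def prob_at_least(k: int, n: int = 15, p: float = 1/3) -> float:
    """P(X >= k) for a binomial(n, p) count."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"Expected documents per writer: {15 * (1/3):.1f}")
print(f"P(one given writer authors >= 13 of 15 by chance) = {prob_at_least(13):.2e}")
```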

An examination of the writing process found that all three writers were meticulous in following the established protocols for checking content, spelling, and punctuation. (Note: At this point, to shorten the story a bit and quickly report the findings, some of the investigative steps have been omitted.)

Eventually, it was noted that Rick’s documents were consistently written at the 12th-grade level or higher, as measured quantitatively using the Flesch-Kincaid readability test. Sherry was writing consistently at the 9th- to 10th-grade level, and Tom at the 5th- to 7th-grade level.
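For readers unfamiliar with the test, the sketch below shows the published Flesch-Kincaid grade-level formula with a deliberately crude syllable counter; real readability tools (and whatever tool the reviewers used) count syllables more carefully, and the two sample sentences are invented for illustration.

```python
# Rough sketch of the Flesch-Kincaid grade-level formula referenced above:
# grade = 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59.
# The syllable counter is deliberately crude; production tools do better.
import re

def count_syllables(word: str) -> int:
    # Count contiguous vowel groups as a rough proxy for syllables.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

simple = "Shut the pump down. Lock it out and tag it. Drain the old oil into a pan. Fill the case with new oil and check the level."
dense = ("Prior to commencement of lubrication activities, personnel shall verify "
         "isolation of the equipment and confirm the applicability of the procedure.")

print(f"{flesch_kincaid_grade(simple):.1f}  {flesch_kincaid_grade(dense):.1f}")
```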

Moreover, Rick had another writing trait that contributed to misinterpretation: he regularly used the connective “and/or.” The other two writers did not. In some cases, “and/or” had the meaning of “do both A and B.” In other cases, the person doing the work could choose which action to take, as in “You can do A, or you can do B.” Therefore, the simple sentence “You can drill a dimple in the shaft for a set-screw to affix the pinion gear on the shaft and/or you can use a retaining wire around the pinion gear” could be interpreted to mean:

  • Drill a dimple and use a set-screw, and install the retaining wire on the pinion gear. That is, do both.
  • Drill a dimple and use a set-screw, but you don’t have to install the wire on the pinion gear.
  • Install the wire on the pinion gear, but you don’t have to drill a dimple and use a set-screw.

Thus, a hypothesis was put forward that Rick’s writing style was the common cause. Two factors apparently made Rick’s documents more prone to execution errors:

  • They were more difficult to read and interpret.
  • Use of the connective “and/or” caused confusion.

Still, a question remained: Since all three writers were required to have a second party read their work for content, why wasn’t this problem detected prior to implementation?

The answer was that Rick, an engineer, gave his work to other engineers to review. Those engineers did not notice that the reading level was at the 12th grade or higher. They were used to material written at that reading level. On the other hand, Tom and Sherry gave their work to supervisors in the electrical and mechanical departments who had come up through the ranks.

Also, the engineers that reviewed Rick’s documents could determine which alternative was indicated by the “and/or” by context, and often used that connective in their own writing. Hence, the protocol for checking documents was found to be deficient in that it didn’t require having a person from the same audience that would use the document to review it.

Given that Tom and Sherry had been with the company for many years, a further test of the hypothesis was that the number of procedure-compliance problems should have been measurably lower before Rick’s documents were put into use. And, indeed, prior to Rick’s work being implemented, the number of similar human-performance problems was found to be lower. Subsequently, the problem was fixed by:

  • Instructing Rick to write shorter, less complex sentences and to check his work against the Flesch-Kincaid readability index.
  • Directing Rick to discontinue using the “and/or” connective.
  • Revising the document review protocol to require that a member of the audience who would actually use the document review it for content, spelling, and punctuation.

Bottom line

Keep in mind that the conclusions reached in the departmental causal factor investigations in this example did not determine why otherwise good workers were making incorrect decisions. The common-cause method, however, did uncover factors that contributed to the making of those decisions. MT

Randy Noon is a Root Cause Team leader at Nebraska’s Cooper Nuclear Station near Brownville, NE. A licensed professional engineer in the United States and Canada, this noted author and frequent contributor to Maintenance Technology has been investigating failures for more than three decades. Contact him at rknoon@nppd.com.

To learn more from Randy Noon and other industry experts regarding various problem-investigation issues, approaches, and techniques, see:

“The Scientific Method”

“The Shortest Distance Between Success and Failure”

“Finding the Root Cause Isn’t Always the Solution”

“Why Some Root-Cause Investigations Don’t Prevent Recurrence”

“MTBF > MTBM”

“Detection of Cooling Water Intrusion into Standby-Power Diesel Engines”

“Failure Analysis of Machine Shafts”

“Get to the Root Cause”

7:35 pm
December 1, 2014

My Take: Connecting Your Enterprise — Leveraging the Internet of Things

By Jane Alexander, Managing Editor

This month’s cover asks “What’s Trending Now?” My take is that the “Internet of Things (IoT)” is one of the hottest topics out there, especially among the suppliers of technologies for consumer, commercial and industrial applications. Alas, among industrial end-users, the IoT may be one of the least understood trends. (According to LNS Research, almost half of industry executives still don’t understand it.) Troubled by this report, I turned to Opto 22 to sum up the IoT and its benefits for our readers.

In short, the term “Internet of Things” describes how more than just computers and phones can be connected to the Internet. Practically any electrical or mechanical device, sensor or system can be connected. These “smart devices” possess enough intelligence to connect to a network and exchange data with computers and other smart devices. Although millions of such devices are estimated to be in place worldwide today, that number is expected to swell to billions as many more come online within the next 10 to 15 years.

As most of us have seen, IoT benefits are often promoted in terms of consumer applications (e.g., monitoring and controlling household lighting, heating, cooling and security, or tracking personal health and fitness parameters, including monitoring vital signs with devices worn on the body). But the big news is really about the substantial amounts of data that connected smart devices can generate, regardless of application—and the valuable, actionable information this data can yield.

According to Opto 22 Vice President Benson Hougland, while consumer IoT applications have attracted significant attention, the majority of growth in connected devices is occurring in industrial and commercial applications. That, in turn, has sparked the term “Industrial Internet of Things (IIoT).”

But, then, what’s so new about monitoring and controlling equipment in commercial and industrial settings? Automation is a long-established discipline. SCADA and other control systems have existed for decades, right? How do IIoT technologies differ from control and remote monitoring systems already in use across plants and facilities?

As Hougland explains, one difference is the devices themselves. “Smart devices can connect to enterprise networks using Internet Protocol (IP), which is not usually the case for machines connected on a fieldbus or other proprietary network,” he notes. Furthermore, these devices exchange data using widely adopted protocols from the IT world, with no intermediary system required to translate between different protocols. IIoT sensors are small, low-power and often wireless, so an existing automation system can be instrumented and its data collected without affecting its operation or putting production at risk.
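
To make the “widely adopted protocols” point concrete, here is a minimal sketch, assuming a hypothetical broker address and topic, of a single sensor reading published over plain IP using MQTT, one lightweight messaging protocol commonly used for this kind of device-to-enterprise traffic (the example requires the open-source paho-mqtt Python package). It illustrates the general idea only, not any particular vendor’s product.

```python
import json
import time
import paho.mqtt.publish as publish

# A single sensor reading; the tag name is a hypothetical placeholder.
reading = {
    "sensor_id": "pump-7-bearing-temp",
    "value_degC": 68.4,
    "timestamp": time.time(),
}

# Publish the reading to a hypothetical MQTT broker over standard IP networking.
publish.single(
    topic="plant/line1/pump7/bearing_temp",   # hypothetical topic
    payload=json.dumps(reading),
    hostname="broker.example.com",            # hypothetical broker address
)
```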

“We declared IP the ‘dial tone’ of the Internet when we launched industrial Ethernet-based products 14 years ago,” Hougland continues. “In other words, communication can’t occur without it, and that’s the same for the Internet of Things today.”

The IIoT clearly presents end-users with opportunities to improve production efficiency and operational performance. As a maintenance and reliability pro, you’re no doubt familiar with IIoT applications in the areas of predictive/preventive maintenance. Still, Hougland says, as more smart devices emerge, the IIoT will augment more complex applications like integrated automation, condition-monitoring and asset-management systems.

All this raises yet another question: What’s your take on the IoT/IIoT? (Leveraging this phenomenon involves more than technology: What about the human element, for example?) We’ve already heard from almost 300 readers through a recent survey. Please turn to page 20 to see what we learned.

For now, we at Maintenance Technology thank you for your support in 2014, and look forward to serving you in a very Happy, Prosperous New Year and beyond! MT

jalexander@maintenancetechnology.com

7:06 pm
November 4, 2014

Preventing Impending Equipment Failure Earlier

Advanced-pattern-recognition technology adds a powerful predictive component to your reliability toolbox.

By Jane Alexander, Managing Editor

Improving reliability metrics has long been among the top priorities of plants and other physical asset-intensive organizations. Plant managers, engineers and technicians continually work to ensure that equipment will efficiently operate for as long as safely possible.

Proper maintenance plays a significant role in asset performance and reliability. Plants today incorporate a combination of maintenance techniques to obtain the best return from each asset. Adding a predictive component to a comprehensive strategy can uncover issues that other maintenance techniques may not detect on their own, leading to even greater reliability improvements. According to InStep Software (InStep), its PRiSM predictive asset analytics tool is that type of component.

PRiSM uses advanced pattern-recognition (APR) to derive predictions from empirical models generated by “learning” from an asset’s unique operating history during all ambient and process conditions. The model effectively becomes the baseline to determine the normal operational profile for a piece of equipment.  While modeling can be a complicated task, PRiSM is designed to simplify and streamline the process, allowing models to be created in minutes, rather than days or weeks.

By comparing an asset’s unique operational profile with real-time operating data, PRiSM can detect subtle changes in system behavior that are often the early warning signs of impending equipment failure. Engineers and operators are alerted well before the deviating variables reach standard alarm levels, creating more time for analysis and planning any corrective action. Once an issue has been identified, the software can provide root cause analysis and fault diagnostics to help the plant engineer understand the source of the issue and how to proactively address the problem. Diagnostic technology lessens the likelihood that abnormal operating conditions will be attributed to the wrong variable.
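
PRiSM’s modeling engine is proprietary, so the following is only a generic sketch of the residual-based idea described above: fit a baseline model to an asset’s normal operating history, then flag real-time readings that drift from the model’s prediction well before a fixed alarm limit would trip. All data, variable names, and thresholds here are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic "normal" history: ambient temperature and load vs. bearing temperature.
rng = np.random.default_rng(0)
X_hist = rng.uniform([10, 500], [35, 1500], size=(1000, 2))
y_hist = 40 + 0.8 * X_hist[:, 0] + 0.01 * X_hist[:, 1] + rng.normal(0, 0.5, 1000)

# Baseline model of normal behavior, plus a residual-based alert threshold.
baseline = LinearRegression().fit(X_hist, y_hist)
threshold = 4 * np.std(y_hist - baseline.predict(X_hist))

# A real-time sample under familiar conditions, but the actual reading runs hot.
X_now = np.array([[25.0, 1200.0]])
actual = 75.0
residual = actual - baseline.predict(X_now)[0]
if abs(residual) > threshold:
    print(f"Early warning: reading deviates from baseline by {residual:.1f} deg C")
```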

PRiSM doesn’t require special or additional sensors. Instead, the software relies on existing machinery sensor data, both historical and real-time (historical data typically resides in the plant historian), for input into the modeling and predictive process. It works with all types of equipment and equipment manufacturers.

Sample PRiSM actual vs. predictive model

Providing actionable insight

InStep notes that in addition to helping enhance equipment reliability, PRiSM provides a range of other benefits. Unscheduled downtime can be reduced because plant operators receive early warning notifications of incipient issues, thus avoiding potential asset failures. Rather than having to shut the plant down immediately, engineers can assess a problematic situation and possibly schedule maintenance for a more convenient time. Maintenance costs are then reduced due to better planning; parts can be ordered and shipped without rush and equipment can continue running.

Some suggested maintenance windows can be lengthened as determined by equipment condition and performance. The use of predictive analytics software also gives plants the ability to identify underperforming assets, increase asset utilization and extend equipment life. Other benefits are realized when considering the costs that “could have been,” including replacement equipment, lost productivity, additional man hours, etc., when a major failure is avoided. With this actionable information, plants can achieve increased equipment reliability, availability, capacity and performance.

Generating real-world payback

PRiSM’s performance in real-world plant situations is well documented. In one example, soon after implementing the software in its operations, a large nuclear power-generation company identified a significant fault. PRiSM determined that oil temperatures of a condensate-pump-motor bearing weren’t within the normal range defined by the multi-dimensional model. The cause was an improperly assembled coupling that was seizing and approaching mechanical failure. If the issue had gone undetected, the coupling problem would have resulted in damage to both the motor and the pump—and an associated replacement time of four to six weeks. Replacement cost, expediting fees and craft overtime were estimated at $700,000; with the probability of this failure estimated at 0.70, the risk-weighted savings came to 0.70 × $700,000, or about $490,000.

In another instance, a large North American electric power company was able to avoid a potentially disastrous failure because of an early warning notification provided by PRiSM. Site personnel were alerted to a vibration step change on a steam turbine that had previously been operating normally. Plant personnel verified that both the proximity-probe and casing-vibration readings had changed. Further analysis indicated a likely loss of mass in the turbine blade path. Based on the unit’s history, the utility immediately suspected that shroud material had been lost. It was determined that the unit could continue to run at a reduced output, under increased observation, until a more convenient and strategic time to bring it off-line. Once it was brought off-line, a borescope inspection verified missing shroud material and found several other segments that were close to liberating.

Had this issue not been identified through APR vibration modeling, it could have caused immediate unplanned downtime, loss of generation, possible catastrophic failure and danger to personnel. The change was not significant enough to alert the operations staff of this impending condition via normal monitoring practices. It was determined that use of PRiSM and PdM protocols was the reason for this positive outcome, which resulted in a potential savings of millions of dollars in lost revenue and increased repair costs, in addition to maintaining the safety of the operating engineers. MT

UPDATE: Schneider Electric to Acquire InStep Software

Schneider Electric (Schneider-Electric.com/us) recently announced that it has entered into an agreement to acquire Chicago, IL-based InStep Software (instepsoftware.com), the provider of PRiSM and other real-time performance-management and predictive asset-analytics software and solutions.

According to the two companies, InStep’s eDNA, PRiSM and EBS software offerings will be core to Schneider Electric’s future strategy in data management, predictive asset analytics and energy management. The acquisition strengthens Schneider Electric’s reputation as an industry game changer and fits within the company’s strategy of improving and expanding its product offerings for the global power and energy market.

“InStep expands our capabilities and presence within the power and energy management markets, particularly in the area of information management, which includes process history, reporting and analysis,” said Rob McGreevy, Vice President of Information, Operations and Asset Management for Schneider Electric. “InStep provides additional capabilities in predictive analytics as well. Today, much of the analytics business is within the power industry and pertains to assets, but we expect that to expand to other industries. Another benefit, of course, is that through InStep, we’re adding an excellent management team and some highly experienced employees to our software team, which will certainly help us create additional value for our customers.”

InStep’s PRiSM software, specifically, is expected to help Schneider Electric fulfill strategic plans around Big Data, the Internet of Things and other emerging trends. Appropriate future integration between PRiSM and Schneider Electric’s suite of Avantis products is also envisioned.
