September 12, 2016

Old-School Approach to New-World Technology

Colorado window and door manufacturer creates sophisticated, energy-efficient products with tried-and-true maintenance and operations best practices.

All of Alpen’s fiberglass insulated frames are custom made in various shapes, sizes, and with a variety of operational moving parts.

March 18, 2016

“Pit Crews” Keep Snacks On Track

Cheetos snacks move through an accumulation conveyor at the Perry, GA, Frito-Lay manufacturing facility.

High-performance machines require highly skilled professionals who use a race-car team approach to maintenance and reliability at Frito-Lay’s largest North American manufacturing facility.

By Michelle Segrest, Contributing Editor

Lay’s potato chips move up the potato chip incline conveyor to seasoning.

The one million-sq.-ft. Frito-Lay manufacturing facility in Perry, GA, operates like a well-oiled, high-speed race-car track.

The operations teams drive the machines, but it’s the 100 maintenance professionals on five specialized teams who work in the garage and in the pits to build, repair, and optimize the equipment—taking it from the shop to the track. They ensure the production stays in constant motion as it circles the refined Frito-Lay course, around-and-around, nonstop, 24/7.

Perry’s director of maintenance and engineering, Craig Hoffman, is the crew chief. The overall maintenance philosophy requires proactive maintenance and methodologies, he said. However, just like a race-team pit crew, they must have the ability to respond to unexpected issues.

“NASCAR teams spend a lot of time in their shops building their cars, analyzing, making adjustments, and fixing problems. We use similar techniques,” Hoffman said. “Our foundation is planning and scheduling, which is supported by preventive and predictive maintenance and root-cause analysis. We do everything we can to make sure our equipment is ready to perform.”

In a facility that produces thousands of pounds of potato chips, tortilla chips, and many other Frito-Lay products per hour, the equipment must stay in optimal condition to deliver high-performance production, he said.

“Our job is to turn the equipment over, in the best possible shape, to the operations group. But every race day there is a situation where you have to respond. When something happens, we go into the pit-crew mentality—it’s all hands on deck. What is constantly on our minds is how to keep our equipment in safe, reliable, food-safe condition so that the drivers can continue to move the lines around the track. We do a great job upfront with our proactive technologies. I would love to say we are perfect. When, however, you have as much equipment as we do, something is going to happen. And we have to be able to respond.” 

The different teams play different roles, yet all share a common goal: to produce millions of pounds of snack foods annually.

The Perry facility houses 15 manufacturing lines that produce all flavor varieties of Frito-Lay snacks, including Doritos, Cheetos, Tostitos, Ruffles, Lay’s, Fritos, SunChips, Stacy’s, Smartfood, Rold Gold, and Funyuns. Built in 1988 with just two lines, the largest of Frito-Lay’s 36 North American manufacturing facilities has undergone several expansions in nearly three decades, including the addition of three production lines in the past 14 months.

Maintenance philosophy

Doritos nacho-cheese-flavored chips travel through the distribution system to packaging. Photos: Michelle Segrest.

Hoffman’s team is responsible for the maintenance of countless pieces of equipment, including fryers, ovens, extruders, a fleet of automated vehicles (including cranes and robots), weighers, kettles, pumps, motors, instrumentation, packaging equipment, seasoning-application equipment, boilers, air compressors, air dryers, switch gears, bag-packaging tubes, and several miles of conveyors throughout the facility.

The site’s maintenance professionals are divided into five teams that cover all facets of the facility:

  • core plant – includes all of the machines that manufacture, package, and process the larger, core products such as Lay’s and Doritos
  • bakery area – manufactures, packages, and processes baked products
  • facility – handles buildings, grounds, infrastructure, boilers, compressors, steam system, and other related equipment
  • warehouse – takes care of the shipping and distribution equipment, and all palletizing equipment, robots, and cranes
  • controls – manages the controls infrastructure, all operator interface terminals, PLC programming, and IT systems.

Hoffman teaches planning classes to all Frito-Lay employees. “I always cite the example of changing oil in the car,” he said. “Most people tell you to put the car up on blocks, drain the old oil, then put in the new oil. When I change the oil, I go into my shop first and make sure I have the oil filter. I make sure I have the oil. I make sure my jack is in good condition, and I have jack stands for safety. Then I make sure it is time to change the oil. A lot of people tear right into a project without having the right parts or the right information to do the job. To me, this is all about planning.”

“Another example is when you go on vacation,” Hoffman said. “I don’t know anyone who just wakes up one morning and says, ‘I’m out of here.’ You plan the vacation. You decide where you are going to go, what you are going to do, where you will stay. You buy tickets. You put a plan together before you go tackle that vacation, just like we would put a plan together before we would tackle any job. We are making sure we have the right parts, the right information, and the right tools to go execute good work.”

The work comes from the facility’s PM (preventive maintenance) system. Operators provide insight on how their machines are running. Then the maintenance team maps out a plan to restore the equipment to the optimal operating condition. When the plan is set, they schedule and execute it. “If you don’t have a plan, you have no control. If you fail to plan, you plan to fail,” Hoffman said.

Even though it is a low percentage of the time, unplanned maintenance also happens, according to Jim Northcutt, who is in charge of all maintenance and engineering for Frito-Lay’s 36 North American facilities. He coordinates the facility maintenance managers from the corporate office in Plano, TX, and executes a streamlined maintenance approach across all facilities.

“The company, as a whole, runs very efficiently,” Northcutt explained. “When we do have an unplanned event, the maintenance managers get their team marshaled around making sure they have the right tools and the right expertise to get it corrected and back online. There is not a silver bullet there. It is just really good people who work in our organization who are very talented.”

Best maintenance practices

Maintenance mechanics Mike Day and Dave Maddox oversee shop rebuilds.

Planning and scheduling is supported with an in-depth PM system, along with highly upgraded technology such as vibration analysis and ultrasound, and carefully crafted PdM (predictive-maintenance) processes.

For corrective work, the planners and schedulers go to the storage area and check out several parts and then kit them for the mechanics, Hoffman said. Then jobs are reviewed with the mechanics. “The key here is to make our mechanics as successful as possible by giving them the right equipment, the right parts, and the right tools to maximize wrench time. This way, when they are out on the floor they have everything they need. It eliminates travel time back and forth and maximizes our ability to perform corrective work and keep our plant in a reliable state.”

When the mechanics receive a schedule, it determines the location of the kitting bin. The bins are numbered and lettered so the mechanic can easily find them and be prepared to successfully perform the job.

The planning and scheduling foundation translates across all North American facilities, Northcutt said. “If you look at it in its most simplistic terms, we plan it, we schedule it, we execute it,” he said. “As a company, throughout all facilities, planning and scheduling is what we hang our hat on.” 

Other best practices include using condition-based approaches and the previously referenced predictive technologies, i.e., thermography, ultrasound, and vibration analysis. Staffing and development is also important, said Richard Cole, director of maintenance and engineering at the Fayetteville, TN, facility.

“It is crucial to have the right people in the right place,” Cole said. “We are continuously developing their skills. We leverage local junior colleges and trade schools to bring in students as interns to work with the mechanics and get training. We have a strong focus around processes and systems, planning and scheduling, work orders, and predictive maintenance. We must always be looking at continuous improvement from scorecards and action plans. Reward and recognition also plays a role in our maintenance strategy.”

Knowing the score

To stay on track, Frito-Lay believes in knowing the score.

“We track our downtime performance here very closely,” Hoffman said. “We have the ability, through technology, to monitor our line performance almost to the minute. I challenge my managers and my mechanics to always know the score. It’s just like how a racecar driver knows what lap he is on, how much fuel he has left, and how much air is in the tires—he knows when to make a pit stop. You always have to know where you stand against the target you set.”

Maintenance planners Tim Waller, Don Reynolds, and Jeff Tuck take a break in the maintenance-parts room. Planning, scheduling, and kitting parts are key components of the overall maintenance strategy at Frito-Lay.

Frito-Lay’s key performance indicators (KPIs) include safety scores, such as the number of days the facility has gone injury free. They also measure total downtime, equipment downtime, operation downtime, changeovers, and material-related downtime.

“We have to have our house in order and provide a stable, safe work environment for our operators,” Hoffman said. “With multiple changeovers, the quality could go south fast and our operators become extremely frustrated. If we hold our equipment reliability at the highest level, our operators have a very good chance to have a successful day. It allows them to focus on their quality metrics, how their line is running, and how we are holding our product to the highest standard. This is especially important when [it comes to] making food.”

Northcutt said anyone at any of the facilities in the U.S. and Canada can immediately see the metrics.

“I’m an old football coach, and I believe in knowing the score,” Northcutt noted. “Mechanics and those running the equipment from an operations perspective all know the score. This includes everything from planning and scheduling to inventory control to efficiency. Our ability to focus in on performance to improve performance makes us unique as an organization. On a weekly basis, the operations and technical teams come together to talk about outages or failures and then they step back and consider if it’s systemic or a piece of equipment. We call that ‘fix it forever.’”

Frito-Lay promotes internal competitions among facilities to inspire the operations and maintenance teams to keep score on key metrics. The company provides performance reports and ranks the various sites in different categories. There is a national downtime competition throughout the year that measures uptime and unplanned downtime. Winning teams are recognized through various company incentives.

“In this business, we like to know if we won,” Hoffman said. “If you don’t keep the numbers visible to the team, and if they don’t think it’s important to the leadership, their motivation will falter. Keeping the score is the greatest motivational technique we have in this business. Talk to any of my mechanics, they will tell you that I’m all about watching the downtime numbers with a goal of minimal downtime.”

The Fayetteville site’s Richard Cole pointed out that the friendly competitive challenges across facilities are motivational, but the teams also remember they are ultimately on the same side. Successful new processes and systems are shared across sites and the camaraderie that develops is strong. Support is given throughout the company, whether it’s hands-on, directional, or coaching to help personnel at all Frito-Lay sites improve performance.

Keeping up with new technology

Maintenance mechanic Fred Luther uses ultrasound technology as part of routine predictive maintenance.

Because Perry is the largest, most complex Frito-Lay facility, it has become the test site for new technology.

“If there is a new piece of equipment, we have very close contact with corporate engineering and our research-and-development team. They want to bring it here and let us try to help make it successful or let us cut our teeth on it and prove it before we deploy it to other facilities,” Hoffman said.

The Perry facility also has technically apt teams. “We are blessed with some of the most highly skilled maintenance and technology professionals in the company,” he added. “So we get all the new toys. It’s kind of cool. It challenges us.”

The teams go through rigorous training with the equipment vendors and supplement it with training at local technical schools. They also solicit other vendors and suppliers to provide training programs and classes on new technology.

Leveraging improvement, energy, and reliability

According to Hoffman, many different facets of continuous improvement are introduced at the Perry site and throughout all Frito-Lay operations. Through root-cause analysis, issues are engineered out to avoid repeat failures, and improvement programs are launched to upgrade or harden pieces of equipment to increase reliability.

The team also troubleshoots how to reduce utility consumption while maintaining reliability. They study how to reduce parts costs and the overall cost of making the product.

“Our primary focus in the reliability business is just that…how do we become more reliable?” Hoffman said. “A lot of continuous improvement involves hot teams. So if there is an issue on the floor, for example, repeat failures, or if the operations team cannot get to the quality metrics they need, we will launch a hot team right there. These cases often involve managers, maintenance technicians, and operations professionals. We’ll brainstorm and come up with ideas, call outside vendors, and find some potential improvements.”

Palletizing robots prep product for distribution.

The focus becomes more than just reliability issues. Hot teams are also formed to solve issues that surround quality, safety, and operation optimization, to reduce the overall cost of the production.

“When you have a major failure or breakdown on a manufacturing line, everything sits there running,” Hoffman said. “You are still using gas to keep the ovens and fryers hot. Electricity is making all the other motors turn, but if you’re not making product, you’re just wasting utilities. If you have a reliable plant, inherently you improve your utility usage because you make product when you are supposed to.”

Frito-Lay supports other programs, including combustion tuning, minimizing fuel usage, and reducing utility consumption.

Production consistency

PepsiCo, Frito-Lay’s parent company, recently celebrated its 50th anniversary. The corporate arm keeps a keen eye on maintaining consistency throughout its processes, Northcutt said.

“Anytime you have a multi-plant environment, you have to have consistency,” Northcutt said. “A Lay’s potato chip made in California or Canada has to taste the same as the ones produced in Georgia. One thing we did well many years ago was rolling out and making sure everyone had the same tools, the same CMMS, the same inventory control, and the same purchasing process. We rolled out ultrasound as our primary condition-based tool. Consistency from one site to the other is something that becomes really important. We make sure to have consistent applications and then everyone is on the same playbook.” MT

Michelle Segrest has been a professional journalist for 27 years. She has covered the industrial processing industries for nine years and toured manufacturing facilities in 28 cities in six countries on three continents.

Frito-Lay Fayetteville Facility Earns Maintenance Excellence Award

The Foundation for Industrial Maintenance Excellence (FIME) organization is dedicated to the recognition of maintenance and reliability as a profession. FIME sponsors the North American Maintenance Excellence (NAME) Award, which is an annual program that recognizes North American organizations that excel in performing the maintenance process to enable operational excellence.

Frito-Lay’s Fayetteville, TN, site was the recipient of the prestigious award in 2011.

Jim Northcutt and Richard Cole were heavily involved in fulfilling the stringent requirements to achieve this honor.

“Jim and I are constantly looking outside of Frito-Lay to study industry trends and best maintenance and manufacturing practices,” Cole said. “It’s important to have opportunities to see what other companies are doing and research new technologies to bring back to the organization.”

Through the NAME Award process, and through industry partners including the Univ. of Tennessee Reliability and Maintenance Center and organizations such as SMRP, Frito-Lay has been able to connect with various colleagues to benchmark performance.

“We like to challenge ourselves to find out how good we can possibly be,” Cole said. “This benefits our own culture, as well as the entire American manufacturing culture.”

During the lengthy application and selection process for the NAME award, Cole worked closely with Northcutt at the corporate level to see how the Fayetteville site stacked up as a world-class manufacturing facility.

FIME sends four to five technical experts to assess the site in many different categories for a week. “They then give an assessment and let you know how you perform and where you need to improve,” Cole said. “Our processes, systems, teams, skills, and leadership hit this high level, so we were recognized for the award.”

Frito-Lay was then able to use the Fayetteville site as an example for its other facilities.

The objectives of the NAME Award, which was established in 1991 as a nonprofit, are to:

  • Increase the awareness of maintenance as a competitive edge in cost, quality, reliability, service, and equipment performance.
  • Identify industry leaders, along with potential or future leaders, and highlight best practices in maintenance management.
  • Share successful maintenance strategies and the benefits derived from implementation.
  • Understand the need for managing change and stages of development to achieve maintenance and reliability excellence.
  • Enable operational excellence.

Winners of the NAME award are site-specific. Some years there are no winners and some years there are two or three winners. It’s a rigorous process, but those who qualify earn the award.

March 18, 2016

People, Culture, and Change

“I love researching solutions to problems, evaluating the best path forward, and implementing improvements. Reliability has an endless supply of opportunity.”
—Robert Bishop

Robert Bishop combines technical expertise with leadership to improve reliability at Bristol-Myers Squibb Co.

By Michelle Segrest, Contributing Editor

A passion for people and equipment allowed Robert Bishop the opportunity to find his dream job at the intersection of reliability and systems improvement. “I enjoy dealing with the equipment side of things, but I also love to deal with people,” the Bristol-Myers Squibb Co. (BMS) maintenance engineer said. “I realized early on that this is part of who I need to be professionally.”

With a degree in mechanical engineering from the Univ. of Rochester and a master’s degree in bioengineering from Syracuse Univ., Bishop had many career options. He worked in validation for 12 years and then had to make a decision to either be a lifer or diversify.

“I knew that if I didn’t do something soon, the decision would be made for me,” he said. “The opportunity came for the role I’m currently in, so I took the leap. From the first day I sat in this chair, I’ve never regretted it. If I could sit down and create the perfect job for myself, I couldn’t come up with a better fit.”

Three and a half years later, Bishop has balanced his technical skills with strong management skills to launch and implement many successful reliability programs for BMS.

“The best thing about my responsibilities is the ability to enhance and improve our systems,” he said. “No matter where you are on the continuum there is opportunity to improve. Technology is always changing and people are always joining the team. I love researching solutions to problems, evaluating the best path forward, and implementing improvements. Reliability has an endless supply of opportunity.”

Maintenance and reliability philosophy

Bishop said he believes strongly that action is more impactful than ideas.

“My overall reliability philosophy is to create robust systems, to educate your team, and then get out of the way and let them be successful,” he said. “People are more important than knowledge. I try to remind myself that it’s great to have a lot of ideas, but if we don’t actually do anything, we are never going to go anywhere. You can’t just drag your feet forever. You can force people to do what you want, but if you don’t invest in the people and acknowledge that they are the ones who make things happen, you’re not going to see that benefit for the long run.”

Bishop works with 550 other employees at the BMS biological site in Syracuse, NY. The equipment is similar to what is typically used in a brewery, but with more filtration and chromatography steps. His team of 10 maintenance professionals works on tanks, filters, pumps, gearboxes, skid-based equipment, centrifuges, chromatography, and filtration skids. The larger team involves about 100 people at the site responsible for facilities and engineering. Bishop serves as a maintenance engineer but also is the acting maintenance manager, so he is responsible for maintaining the equipment, as well as the asset-management department and the CMMS system. The non-process equipment is handled through an outsourced maintenance company and there is also a facilities-management group.

Bishop’s connection with people extends to mentoring others to reach their goals and succeed.

He remembers an example when a young woman within a different organization at the site had an interest in reliability but didn’t have any background in it. “Over the course of about a year we had some meetings, lunch-and-learns, and many discussions on the topic,” Bishop said. “I provided her with reading material and links to webinars that would help her to learn. She recently sat for her Certified Reliability Leader exam and passed. I’m very proud of her and know that someday she will have a more formal role in the field of maintenance and reliability.”

Although Bishop spends much of each day in strategy meetings, he also solves day-to-day issues. He drives root-cause analysis, launches new systems, and is involved in upgrades to the CMMS. One of his most successful best-maintenance practices is reporting by exception. “I don’t need to know when everything is going well. I need to know when things are not going as planned so I can communicate to the larger organization,” Bishop said. “I try to look for what isn’t supposed to be there. For example, when you look at the integrity of the data in our CMMS system, you can create all the reports you want. Sometimes, it is beneficial to go look for things you don’t expect to find. For example, I don’t expect to find a blank priority field. But if I write a query for that and pull up all work orders that have blank priority fields, I can ask ‘Why?’ I share an office with our reliability engineer. We report to different reporting structures within our larger facilities, but we work closely together and there are a lot of topics that flow back and forth.”
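
Reporting by exception lends itself to simple queries. The following is a minimal sketch of the idea, assuming a work-order extract loaded into pandas; the table and column names are hypothetical, not the actual BMS CMMS schema.

```python
import pandas as pd

# Hypothetical extract of CMMS work-order data; the column names are
# illustrative only, not the schema of any real CMMS.
work_orders = pd.DataFrame({
    "wo_id": [1001, 1002, 1003, 1004],
    "priority": ["2", "", "1", None],
    "status": ["OPEN", "OPEN", "CLOSED", "OPEN"],
})

# Report by exception: surface only the records that should not exist,
# e.g., work orders with a blank or missing priority field.
blank_priority = work_orders[
    work_orders["priority"].isna()
    | (work_orders["priority"].str.strip() == "")
]
print(blank_priority)  # these are the rows worth asking "Why?" about
```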

Bishop focuses on high-value work. “We all could spend 90 hours a week working and still not get everything done. We have to identify where to put our effort.”

His commitment to people and processes does not go unnoticed by his peers.

“Robert is a well-respected member of the reliability community both internally at Bristol-Myers Squibb Company, and externally,” said George Williams, BMS associate director of asset management, Global Facilities Services. “Robert was awarded the BMS Reliability Excellence Leader of the Year award for 2015. Additionally, he was a finalist for the SMRP Rising Star award and leads their Biologics and Pharmaceuticals VSIG. He is a contributing author and presenter at multiple conferences annually. Rob consistently looks to contribute, collaborate, and improve what we do every day. His ideas have turned into standardized approaches that are shared throughout the BMS network and are helping to drive us toward reliability sustainability.”

“With all of the achievements and accolades, most notable is that Rob is a leader. He is humble, gracious, and looks to develop others, which creates an environment where everyone contributes and feels welcomed. He has a rare combination of skills and knowledge, combined with drive, motivation, and impeccable soft skills to navigate the difficult terrain of a global company.”

Programs that make a difference

Bishop is proud of several programs he has driven. He implemented one for paperless work orders that saved the company 120,000 pages of paper per year and freed up the equivalent of four full-time employees (FTEs). The people were repurposed, and no one lost a job. The program made valuable data available in real time while improving the quality of work.

He also drove a year-long PM-optimization program and implemented a lubrication-enhancement program that allowed closed systems, consolidated lubricants, and visual-management improvements.

The lubrication program focused on a BMS site that has been around since 1943. Originally a facility that produced penicillin during WWII, it had gone through a lot of evolutions through the decades. Many of the lubricants on site were not needed. In fact, some of the drums of oil were 10 to 12 years old.

“There were lubricants with slow turnover, and it just wasn’t ideal,” Bishop recalled. “We didn’t know where everything went. A maintenance technician who had been here for 30 years had a cheat sheet and knew which oil went in which gearbox. It worked great, but was not a very robust system. When I came into my role here, I took it upon myself to pull together a team that analyzed where everything was being used, and then we brought in one of our vendors who helped us consolidate.”

The program allowed the site to pare down from 46 lubricants to just eight oils and four greases.

“We closed up the systems provided by the manufacturer on our gearboxes and level indicators. In most cases we used a sight-glass tube,” Bishop explained. “We closed the systems on the larger ones and installed Quick Connect so we could use a filter cart. We installed sample ports with dip tubes and we started doing oil sampling near where it is being used in the gears and not just in the bottom of the gearbox. We started the oil-sampling program to drive increased reliability. We weren’t necessarily having a lot of failures because of poor lubrication, but we had a lot of practices that weren’t ideal.”

The program included taking steps to do things through visual management. Now, gearboxes have a tag that indicates what is inside. It also identifies the viscosity and the manufacturer, and the same tag is on the oil container that is brought out to the field. An identical tag is on the oil-filtration skid.

It took about a year to transform into a closed system so no moisture or particles find their way into the gearboxes. “It was definitely worth the effort,” Bishop said. “We now have one of the better lubrication programs that I’ve ever seen. Nothing’s perfect, but we now have a very robust system.”

Challenges with change

Bishop said he has always enjoyed change and the positive impact it can have on reliability systems. But sometimes it is difficult to convince others that change is a good thing.

“The biggest challenge is convincing people that improving systems and reducing workload will not result in reduced headcount,” Bishop said. “I point to my track record, and it speaks for itself. My goal is never to get rid of people. The people I work with know they can trust me. I wouldn’t say something and then do something else. For people who don’t know me, I am very proactive about addressing this.”

Bishop relies on tools such as a Best In Class (BIC) weekly meeting where all crew supervisors get together with a common goal to continuously improve and help each other. They use other tools such as ARMED software, which can identify KPIs and reliability data such as a top-10 bad-actor list. Bishop also draws on his more than 10 years of experience in the field of equipment qualification and validation—experience that has provided him with a robust understanding of documentation, quality systems, and equipment.

The greatest tool that Bishop uses is his ability to connect people with culture and change. “I always want to improve,” he said. “I always appreciate the people involved, and I know what it takes to change culture. It isn’t always easy, but it is always possible. It doesn’t have to be a huge project. It can be small, incremental things. But I’m a supporter of change. We must always strive to improve.” MT

Michelle Segrest has been a professional journalist for 27 years. She has covered the industrial processing industries for nine years. If you know of a maintenance and/or reliability expert who is making a difference at their facility, please drop her an email at michelle@navigatecontent.com.

March 18, 2016

Monitoring Slow-Speed Bearings With Ultrasound

An ultrasound program associated with the critical coal-handling conveyor system at Dakota Gasification’s Great Plains Synfuels Plant is proving that catastrophic slow-speed bearing failures can be avoided.

This maintenance professional’s account of his site’s experience with ultrasound technology lays out details that can help others.

By Ron Tangen, CMRP, Dakota Gasification Co.

Dakota Gasification Co. (DGC), a for-profit subsidiary of Basin Electric Power Cooperative, Bismarck, ND, owns and operates the Great Plains Synfuels Plant, a coal gasification complex near Beulah, ND. The plant produces pipeline-quality synthetic natural gas and related products.

It’s a given that coal gasification requires a conveyor system to transport coal. As others may have, Dakota Gasification has experienced its share of frustration with conveyor-bearing performance. An evaluation of the problem suggested that it was impossible to eliminate failures. With this in mind, we set a goal to simply minimize the occurrence of catastrophic slow-speed bearing (SSB) failures. This meant finding a technology that allowed us to accurately monitor bearing condition and maximize service life, as well as alert us when SSBs reached the end of their life, so we could remove them from the system.

DGC’s Coal Handling Operations group recognized the SSB issue several years earlier and, as a result, established a policy of weekly walk-downs of the conveyor system. During these walk-downs, technicians would perform a physical-senses evaluation of each of the main pulley bearings. At some point, a handheld infrared pyrometer was added to their toolbox to help identify failing bearings. Even though it operated close to the bottom of the P-F (potential failure) Curve, this strategy was better than no program, and some successes were realized. Still, we experienced two to four failures per year in a 400-bearing system.

Predictive maintenance (PdM) strategies operating this close to equipment failure inherently have some shortcomings—even when looking at SSBs. With our coal conveyors, the short window for failure detection is a key disadvantage. Bearings often don’t enter the “visual senses” portion of the failure curve until just days or hours prior to catastrophic failure. Operations personnel might perform inspections and report no problems, only to have a failure in the same week. Another disadvantage is the increased cost of catastrophic failure over planned replacement: safety aspects, manpower, production loss, and collateral damage costs far exceed bearing cost. Such issues helped justify transition to an ultrasound-based program.

Leveraging ultrasound for SSB monitoring, we’ve been able to move from one end of the P-F curve to the other—from reactive/failure to predictive maintenance. We would have considered ultrasound a success if it had provided even a few days or weeks of early detection. With this technology, however, we can now produce a report that identifies bearings at risk of failure as much as 12 months out.

Every five weeks, technicians on nine different routes collect data from the main pulley bearings in the coal-handling conveyor and then download it into ultrasound software for archiving, trending, and analysis.

Background

DGC had been using ultrasound for about 15 years in other applications. As we considered our SSB failure problem, the structure-borne nature of bearing applications led us to look into acoustical ultrasound. After trial testing the technology on our conveyor bearings, we conducted a hands-on field demonstration with our operations superintendent. This, we hoped, would be the key to securing high-level management buy-in that was so crucial to the program’s success.

Once the operations superintendent donned the ultrasound equipment and was able to “hear” a couple of bearings for himself, he began working his way down the conveyor gallery to listen to the rest. Upon reaching the end of the gallery, he directed us to implement the technology at DGC as “soon as possible.”

The Great Plains Synfuels Plant, a commercial-scale coal gasification complex near Beulah, ND, produces pipeline-quality synthetic natural gas and related products.

Program details

Since our coal-handling system includes nine major buildings, we created nine ultrasound routes. Technicians from the operations group perform data collection every five weeks. There’s no magic to this interval: We simply wanted a short-enough frequency that would provide two or three readings within a failure cycle. Also, as operations personnel were accustomed to weekly conveyor-system walk-downs, a five-week interval would help gain program buy-in through reduced manpower and avoidance of the “monthly route” syndrome.

Data from routes (collected only on main pulley bearings, not idler bearings) are downloaded into the ultrasound software for archiving, trending, and analysis. Two key elements of the data are the recorded ultrasound wave file and bearing decibel (dB) value. The wave file contains several seconds of digitally recorded sound; the dB value represents its intensity. Without these elements, analysis wouldn’t be possible.
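
How an instrument converts a recorded snippet into a single dB value is vendor-specific, so the following is only a generic sketch: an intensity figure computed from the root-mean-square level of the samples against an assumed calibration reference.

```python
import numpy as np

def wave_db(samples: np.ndarray, ref_rms: float = 1.0) -> float:
    """Generic decibel intensity for a recorded sound snippet.

    Computes 20*log10(RMS/reference). The dB scale of commercial
    ultrasound instruments is vendor-defined; ref_rms is an assumed
    calibration constant, not a published value.
    """
    rms = np.sqrt(np.mean(samples.astype(float) ** 2))
    return 20.0 * np.log10(rms / ref_rms)

# Example: three seconds of simulated signal sampled at 8 kHz
rng = np.random.default_rng(0)
snippet = 0.5 * rng.standard_normal(8_000 * 3)
print(f"{wave_db(snippet):.1f} dB")
```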

Inner-race spalling in a failed bearing.

Analysis process

Several features of our ultrasound tools have been particularly helpful.

Our ultraprobe data collectors allow us to digitally record several seconds of wave-file (sound) data. In addition, a high level of ultraprobe sensitivity allows us to detect small failure modes, i.e., we can track a bearing’s health through its entire life. For example, many bearings are classified as having a “zero-dB fault”—meaning a cyclical fault inside them can clearly be heard (and seen) while the ultrasound equipment is registering a 0-dB sound level (Fig. 1). Although the bearing reflected in Fig. 1 has a fault, it may be years, or even decades, from failure. Typically, we don’t consider replacing a bearing until the sound level is in the 25-to-30-dB range.

Fig. 1. Zero dB fault.

The ultrasound software lets us replay wave files to interpret sound signatures and to analyze dB-value changes over time. This helps establish trend history and future risk of failure. The software also helps us see the amplitude and pattern of a sound—which is a valuable capability when analyzing bearing health.

One challenge that comes with high sensitivity is the presence of competing (structure-borne) ultrasound sources that might be heard along with the bearing sound signature. Competing sources can come from coal (product) falling onto belts or through metal chutes, gearbox noise, or a nearby bad idler bearing. Listening to the sound of bearings, what you actually hear are impacting and/or white-noise frictional forces.

Impacting consists of short-duration frictional forces caused by failure modes such as particle contamination, pitting, spalling, fretting, or broken parts, e.g., a cracked race.

White noise is caused by constant frictional forces. Even “good” bearings have some white noise, i.e., all have some level of constant frictional force acting on them as they rotate. Elevated levels could be a result of new/tight bearings, high dynamic loading, inadequate lubrication, or misalignment.

Fig. 2. Classification is ‘OK.’

Fig. 3. Classification is ‘Moderate Impacting.’

Fig. 4. Classification is ‘Moderate White Noise.’

The recorded wave file is transferred to the software for analysis. This is where the bearing’s signature is established. Each wave file has a unique signature relating to the bearing condition. To simplify things, we evaluate this signature in terms of its impacting and white noise. While signatures may clearly be dominant in one way, a bearing typically reflects a mix of impacting and white noise. Determining how much and what level of each helps establish the bearing’s overall health (Figs. 2, 3, 4). This signature and the dB value of the bearing provide insight into the level of deterioration.
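
DGC’s grading chart is specific to its bearings and instrument, but a common proxy for separating impacting from white noise is the crest factor, the ratio of peak amplitude to RMS. The sketch below applies that proxy with invented thresholds; it illustrates the idea rather than reproducing the actual classification process.

```python
import numpy as np

def classify_signature(samples: np.ndarray) -> str:
    """Rough impacting-vs-white-noise split using crest factor.

    Periodic impacts push the peak up relative to RMS, while steady
    frictional white noise keeps the ratio low. The thresholds are
    illustrative, not DGC's actual classification chart.
    """
    rms = np.sqrt(np.mean(samples ** 2))
    crest = np.max(np.abs(samples)) / rms
    if crest > 6.0:
        return "impacting-dominant"
    if crest < 3.0:
        return "white-noise-dominant"
    return "mixed"

# Example: low-level noise with periodic spikes from a cyclical fault
rng = np.random.default_rng(1)
sig = 0.1 * rng.standard_normal(40_000)
sig[::5_000] += 2.0
print(classify_signature(sig))  # -> impacting-dominant
```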

Since our bearings operate at speeds between 70 and 80 rpm, FFT (Fast Fourier Transform) analysis isn’t feasible. As a result, all analysis on these SSBs is done through our ultrasound equipment’s time series analysis software and interpretation of trended decibel values.

Analyzing SSBs with ultrasound allows us to accurately monitor bearing health over the component’s entire life. This, in turn, allows time to recognize a bearing that’s nearing the end of its useful life or catch one that’s moving into catastrophic failure.

Decibel values charted within the ultrasound software provide important information regarding a bearing’s historical and current conditions—and can even be used to gain insight into the future failure risk. Since individual data points seldom plot in a straight line, “normalizing” the information by drawing a straight line through it provides a more linear perspective in establishing a bearing’s historical and projected trends. The line’s slope represents the component’s rate of failure:

  • A slowly rising slope indicates a bearing that’s deteriorating/failing slowly.
  • A rapidly rising slope indicates a bearing that’s deteriorating/failing rapidly.

Extending the historical trend line into the future allows users to anticipate future bearing health and risk of failure (Fig. 5). The intersection of this line with the established target decibel level for bearing replacements (30 dB in Fig. 5) can provide good clues as to remaining operating life and the timeframe available for a planned replacement.
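
As a rough illustration of this normalize-and-extrapolate step, the sketch below fits a least-squares line to hypothetical route readings and solves for when the trend crosses the replacement target. The data points are invented; the 30-dB threshold is the one cited in the article.

```python
import numpy as np

# Hypothetical route readings: weeks since baseline vs. recorded dB.
weeks = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
db = np.array([8, 9, 12, 11, 15, 17, 19], dtype=float)

# "Normalize" the scatter with a least-squares straight line; the
# slope is the bearing's rate of deterioration in dB per week.
slope, intercept = np.polyfit(weeks, db, 1)

# Extrapolate to the replacement threshold (30 dB in this program).
threshold = 30.0
weeks_to_threshold = (threshold - intercept) / slope
print(f"rate: {slope:.2f} dB/week; "
      f"threshold reached around week {weeks_to_threshold:.0f}")
```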

Decibel values charted within the ultrasound software provide important information regarding a bearing’s historical and current conditions. Extending the historical trend line into the future allows users to anticipate future bearing health and risk of failure. The line’s slope represents the component’s rate of failure: The intersection of this line with the established target decibel level for bearing replacements (30 dB) can provide good clues as to remaining operating life and the timeframe available for a planned replacement. Typically, DGC doesn’t consider replacing a bearing until the sound level is in the 25-to-30-dB range.

Based on analysis of the sound signature and recorded dB level, the bearing can be classified, or graded, regarding its condition. To maintain consistency in our grading process, we developed a failure-classification chart. This chart has been periodically updated over the years to align with the actual deterioration in bearings we’ve removed from the field. Moreover, it only applies to DGC’s conveyor bearings—mostly 3- to 4-in.-dia. spherical roller designs from one manufacturer. The bearing-classification information is also documented in the ultrasound software for purposes of reference and reporting.

Rolling-element spalling.

Value-added insight

Our ultrasound program has provided some value-added insights beyond what we had expected from the technology. Examples include helping track new bearing “wear-in.” We originally anticipated that when a “bad” bearing was replaced, the decibel reading on the “good” bearing would be significantly lower, i.e., close to a normal operational baseline. In fact, after pulling 25- to 30-dB impacting bearings from the conveyor system, we’ve discovered 25- to 30-dB white-noise signatures on their replacements.

Although manufacturers suggest a bearing will wear in over a few days or weeks, ultrasound has shown us a few months to years may be more likely. This isn’t to say OEMs are wrong, but rather to point out ultrasound’s high degree of sensitivity to frictional activity within bearings.

This graph represents what the author has come to identify as a “typical” ultrasound bearing life cycle. It shows an upward slope as the bearing goes into “failure,” and a downward-sloping wear-in period as the new bearing returns to baseline. In this example, he manually inserted two ‘markers’ into the trend data: one to represent when the bearing was replaced, and another to establish the anticipated new baseline.

DGC’s ultrasound experience also suggests correctly lubricated SSBs can survive failure modes that quickly fail bearings in high-speed applications. Some of our SSBs have operated with a cracked race for several years. One that we removed and analyzed had significant spalling on the inner race and rolling elements; its outer race had a chip, crack, and break.

A failed bearing with an outer-race crack, chip, and break.

Ultrasound has also provided insight on the ability of SSBs to recover from certain levels of failure modes. In one case, we saw a bearing decibel trend move, over a 3-yr. period, into accelerated failure and recovery four times. At first glance, the random and inconsistent trend seemed to point to a data-collection problem. A closer look, however, indicated the bearing actually went through incipient failure and recovery. In the end, as we normalized the data, we were still able to establish an overall failure rate.

Despite some challenges since its implementation, no one has said the slow-speed bearing ultrasound program isn’t worth the effort or is not delivering value for the site’s owner/operator Dakota Gasification Co.

Where we are

Although we at DGC still have much to learn about ultrasound, I’m pleased with the progress of our SSB monitoring program. Despite some challenging situations, no one has said the program isn’t worth the effort or isn’t delivering value to our company. Just as important is the fact that managers are now requesting quarterly predictions of high-risk bearings rather than a single annual report.

To date, we don’t have a calculated statistical level of improvement for the program, but the number of visibly damaged components in our bearing showcase continues to grow every year. Each of these bearings represents a catastrophic failure that was avoided—and dollars added to DGC’s bottom line. MT

Ron Tangen is a maintenance-engineering specialist for Dakota Gasification Co. (dakotagas.com), a for-profit subsidiary of Basin Electric Power Cooperative (basinelectric.com). A Certified Maintenance and Reliability Professional (CMRP), Tangen is based at the company’s Great Plains Synfuels Plant, a commercial-scale coal gasification complex near Beulah, ND. For more information on this article, email him at rtangen@bepc.com.

March 18, 2016

Make Sense of Electrical Signals

Waveforms translate to crucial information about the health of your electrical/mechanical equipment systems. How do you read them?

Edited by Jane Alexander, Managing Editor

Devices that convert electrical power to mechanical power run the industrial world. Think pumps, compressors, motors, conveyors, and robots. Voltage signals that control these electro-mechanical devices are a critical but unseen force. The question is, how do you capture and analyze that unseen force?

Oscilloscopes (or scopes) test and display voltage signals as waveforms, i.e., visual representations of the variation of voltage over time. The signals are plotted on a graph, which shows how the signal changes. The vertical (Y) axis represents the voltage measurement and the horizontal (X) axis represents time.

According to the technical experts at Fluke Corp., Everett, WA, an oscilloscope graph can reveal important information, including:

  • voltage and current signals when equipment is operating as intended
  • signal anomalies
  • calculated frequency of an oscillating signal and any variations in frequency
  • whether a signal includes noise and changes to the noise.

Most of today’s oscilloscopes are digital—which enables more detailed, accurate signal measurements and fast calculations, data-storage capabilities, and automated analysis. Handheld digital oscilloscopes offer several advantages over benchtop models. They are battery operated, use electrically isolated floating inputs, and offer the advantage of embedded features that make oscilloscope usage relatively easy and accessible to a variety of workers.

Oscilloscope functions

Sampling. This is the process of converting a portion of an input signal into a number of discrete electrical values for the purpose of storage, processing, and display. The magnitude of each sampled point is equal to the amplitude of the input signal at the time the signal is sampled.

Fig. 1. Sampling and interpolation: Sampling is depicted by the dots while interpolation is shown as the black line.

The input waveform appears as a series of dots on the display (Fig. 1). If the dots are widely spaced and difficult to interpret as a waveform, they can be connected using a process called interpolation, which connects the dots with lines, or vectors.
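
A minimal sketch of both steps, assuming NumPy and Matplotlib: a sine wave is sampled at discrete points, then linear interpolation connects the dots the way a scope’s vector display does. The signal frequency and sample rate are arbitrary illustrative values.

```python
import numpy as np
import matplotlib.pyplot as plt

# Digitize one cycle of a 50 Hz sine at a modest sample rate.
f_signal, f_sample = 50.0, 600.0
t_samples = np.arange(0, 1 / f_signal, 1 / f_sample)
v_samples = np.sin(2 * np.pi * f_signal * t_samples)

# Interpolation: connect the sampled dots with straight-line vectors.
t_fine = np.linspace(0, t_samples[-1], 1000)
v_interp = np.interp(t_fine, t_samples, v_samples)

plt.plot(t_samples, v_samples, "o", label="sampled points")
plt.plot(t_fine, v_interp, "k-", label="interpolated trace")
plt.xlabel("time (s)")
plt.ylabel("voltage (V)")
plt.legend()
plt.show()
```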

Fig. 2. Unknown trace adjusted for 3 to 6 vertical divisions.

Fig. 3. Unknown trace adjusted for 3 to 4 periods horizontally.

Fig. 4. Trigger point is set to the 50% point but, due to the aberration on the leading edge in the second period, an additional trigger results in an unstable display.

Fig. 5. Trigger level adjusted to a unique repetitive position, outside the aberration on the second period.

Triggering. Trigger controls allow users to stabilize and display a repetitive waveform.

Edge triggering is the most common form of triggering. In this mode, the trigger level and slope controls provide the basic trigger-point definition. The slope control determines whether the trigger point is on the rising or the falling edge of a signal, and the level control determines where on the edge the trigger point occurs.
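
In software terms, edge triggering amounts to scanning for the first pair of samples that crosses the level in the chosen direction. A minimal sketch, not any particular scope’s implementation:

```python
import numpy as np

def find_trigger(samples: np.ndarray, level: float,
                 rising: bool = True) -> int:
    """Index of the first edge trigger, or -1 if none occurs.

    Mirrors a scope's edge-trigger controls: 'level' plays the role
    of the level control and 'rising' the slope control.
    """
    prev, curr = samples[:-1], samples[1:]
    if rising:
        hits = np.where((prev < level) & (curr >= level))[0]
    else:
        hits = np.where((prev > level) & (curr <= level))[0]
    return int(hits[0] + 1) if hits.size else -1

# Example: first rising crossing of 0.5 V on a 5 Hz sine
t = np.linspace(0, 1, 1000)
print(find_trigger(np.sin(2 * np.pi * 5 * t), level=0.5))
```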

When working with complex signals such as a series of pulses, pulse-width triggering may be required. With this technique, the trigger-level setting and the next falling edge of the signal must occur within a specified time span. Once these two conditions are met, the oscilloscope triggers.

Single-shot triggering is a technique by which the oscilloscope displays a trace only when the input signal meets the set trigger conditions. Once the trigger conditions are met, the oscilloscope acquires and updates the display, and then freezes the display to hold the trace.

Fig. 6. If the two waveform components aren’t symmetrical, there may be a problem with the signal.

Fig. 7. Use cursors and the gridlines to evaluate the rise and fall times of the leading and trailing edges of a waveform.

Fig. 8. Use horizontal cursors to identify amplitude fluctuations.

Getting a signal on the screen. The task of capturing and analyzing an unknown waveform on an oscilloscope can be routine, or it can seem like taking a shot in the dark. In many cases, taking a methodical approach to setting up the oscilloscope will capture a stable waveform or help you determine how the scope controls need to be set so that you can capture the waveform.

The traditional method of getting a signal to show properly on an oscilloscope is to manually adjust three key parameters to try to achieve an optimum set-point—often without knowing the correct variables:

  • vertical sensitivity: Adjust the vertical sensitivity so that the vertical amplitude spans approximately three to six divisions.
  • horizontal timing: Adjust the horizontal time per division so that there are three to four periods of the waveform across the width of the display.
  • trigger position: Set the trigger position to the 50% point of the vertical amplitude. Depending on the signal characteristics, this action may or may not result in a stable display.

These three parameters, when adjusted properly, show you a symmetrical “trace,” the line that connects the samples of the signal to create the visual depiction of the waveform. Waveforms vary widely, from the common sine wave, which ideally mirrors between positive and negative around the zero axis, to the unidirectional square wave typical of electronic pulses, to sawtooth forms.

The manual setup method often requires tediously adjusting the settings to make the waveform readable in order to analyze it. In contrast, some modern oscilloscopes automate the process of digitizing the analog waveform to see a clear picture of the signal.

Fig. 9. Evaluate waveform DC offsets.

Fig. 10. Evaluate period-to-period time changes.

Fig. 11. A transient is occurring on the rising edge of a pulse.

Understanding and reading waveforms

The majority of electronic waveforms encountered in the workplace are periodic and repetitive—and they conform to a known shape. As you train your eye to understand these waveforms, consider their varying dimensions:

  • Shape. Repetitive waveforms should be symmetrical. That is, if you were to print the traces and cut them in two like-sized pieces, the two sides should be identical. A point of difference could indicate a problem.
  • Rising and falling edges. Particularly with square waves and pulses, the rising or falling edges of the waveform can greatly affect the timing in digital circuits. It may be necessary to decrease the time per division to see the edge with greater resolution.
  • Amplitude. Verify that the level is within the circuit operating specifications. Also check for consistency, from one period to the next. Monitor the waveform for an extended period of time, watching for any changes in amplitude.
  • Amplitude offsets. DC-couple the input and determine where the ground reference marker is. Evaluate any DC offset and observe if this offset remains stable or fluctuates.
  • Periodic wave shape. Oscillators and other circuits will produce waveforms with constant repeating periods. Evaluate each period in time using cursors to spot inconsistencies (see the sketch below).
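
Several of these checks can be automated once a trace has been captured. As one hedged example, the sketch below estimates period-to-period consistency by timing successive rising crossings of a reference level, which is roughly what the cursor-based check above does by hand.

```python
import numpy as np

def period_jitter(samples: np.ndarray, fs: float,
                  level: float = 0.0) -> float:
    """Relative period-to-period variation of a repetitive waveform.

    Measures intervals between successive rising crossings of 'level'
    and returns their standard deviation as a fraction of the mean
    period. Near zero means a stable, consistent period.
    """
    prev, curr = samples[:-1], samples[1:]
    crossings = np.where((prev < level) & (curr >= level))[0] + 1
    periods = np.diff(crossings) / fs  # seconds between crossings
    return float(np.std(periods) / np.mean(periods))

# Example: a clean 50 Hz sine shows essentially zero jitter.
fs = 10_000.0
t = np.arange(0, 0.2, 1 / fs)
print(period_jitter(np.sin(2 * np.pi * 50 * t), fs))
```
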
Fig. 12. This figure shows ground reference-point measurement indicating induced random noise.

 

Fig. 13. Excessive ringing occurring on the top of the square wave.

Fig. 14. This pattern shows a momentary change of approximately 1.5 cycles in the amplitude of the sine wave.

Fig. 15. Performing a frequency measurement on a crystal oscillator that has been trend-plotted over an extended period can highlight the effect of drift caused by temperature changes and aging.

Waveform anomalies

The following items describe typical anomalies that may appear on a waveform, along with their common sources.

  • Transients or glitches. When waveforms are derived from active devices such as transistors or switches, transients or other anomalies can result from timing errors, propagation delays, bad contacts, or other phenomena.
  • Noise. Noise can be caused by faulty power-supply circuits, circuit overdrive, crosstalk, or interference from adjacent cables. Or, noise can be induced externally from sources such as DC-DC converters, lighting systems, and high-energy electrical circuits.
  • Ringing. Ringing is seen mostly in digital circuits and in radar and pulse-width-modulation applications. It shows up at the transition from a rising or falling edge to a flat DC level. To check for excessive ringing, adjust the time base to give a clear depiction of the transitioning wave or pulse.
  • Momentary fluctuation. Momentary changes in the measured signal generally result from an external influence such as a sag or surge in the main voltage, activation of a high-powered device that is connected to the same electrical circuit, or a loose connection.
  • Drift. Manifested as minor changes in a signal’s voltage over time, drift can be tedious to diagnose. Often the change is so slow that it is difficult to detect. Temperature changes and aging can affect passive electronic components such as resistors, capacitors, and crystal oscillators. One problematic fault to diagnose is drift in a reference DC-voltage supply or oscillator circuit. Often, the only solution is to monitor the measured value (VDC, Hz) over time; a minimal trend-logging sketch follows this list.
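
For the drift case, one workable approach is a simple trend log. The sketch below is hypothetical Python; `read_instrument()` is a placeholder for whatever logging interface your meter or scope actually provides. A linear fit over the history makes slow drift visible as a nonzero slope long before it is obvious on screen.

```python
import time
import numpy as np

def read_instrument():
    """Placeholder: replace with your meter's or scope's real logging interface."""
    raise NotImplementedError

def monitor_drift(n_readings=100, interval_s=60.0, max_slope_per_hr=0.001):
    """Log a DC voltage (or frequency) at fixed intervals and flag slow
    drift by fitting a straight line to the logged history."""
    times, values = [], []
    for _ in range(n_readings):
        times.append(time.time())
        values.append(read_instrument())
        time.sleep(interval_s)
    hours = (np.array(times) - times[0]) / 3600.0
    slope, _ = np.polyfit(hours, np.array(values), 1)  # units per hour
    if abs(slope) > max_slope_per_hr:
        print(f"Drift detected: {slope:+.6f} units/hour")
    return slope
```

MT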

This article was edited using information supplied by technical experts at Fluke Corp., Everett, WA. For more information on this and other testing and measurement topics and technologies, visit fluke.com.

Diagnosing Problems and Troubleshooting

Technical experts at Fluke Corp. note that while successful troubleshooting is an art and a science, adopting a methodology and relying on the functionality of an advanced oscilloscope can greatly simplify the process.

The time-tested approach known as KGU (known good unit) comparison builds on a simple principle: An electronic system that is working properly exhibits predictable waveforms at critical nodes within its circuitry, and these waveforms can be captured and stored.

A reference library of waveforms from a KGU can be stored on some oscilloscopes or printed out to serve as a hard-copy reference document. If the system or an identical system later exhibits a fault or failure, waveforms can be captured from the faulty system—called the device under test (DUT)—and compared with their counterparts in the KGU. Discrepancies between the two point to the faulty circuit stage, and the DUT can then be repaired or replaced.
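
When captures are available as data, the comparison itself can be automated. The following is a minimal sketch, assuming both waveforms are NumPy arrays captured at the same sample rate and trigger point:

```python
import numpy as np

def compare_to_kgu(kgu_wave, dut_wave, tol_fraction=0.05):
    """Compare a device-under-test capture against a known-good-unit
    reference waveform.

    Returns the sample indices where the DUT leaves a tolerance
    envelope around the KGU trace; an empty result means the DUT
    tracks the KGU within tolerance at this node."""
    kgu = np.asarray(kgu_wave, dtype=float)
    dut = np.asarray(dut_wave, dtype=float)
    if kgu.shape != dut.shape:
        raise ValueError("captures must have the same length and alignment")
    envelope = tol_fraction * (kgu.max() - kgu.min())  # absolute tolerance band
    return np.where(np.abs(dut - kgu) > envelope)[0]
```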

CAUTION: For the correct and safe use of electrical test tools, it is essential for operators to follow safety procedures as outlined by their company and local safety agencies.

March 18, 2016

Implement an Oil-Analysis Program


Keeping a close eye on the life-blood of your lubricated equipment systems pays off in many ways, all of them crucial.

By Ken Bannister, MEch Eng (UK) CMRP, MLE, Contributing Editor

When a doctor wants to assess the condition of your health, he or she may order a blood test. Similarly, oil analysis, sometimes referred to as “wear-particle analysis,” is a mature condition-based maintenance approach used to determine the health of a machine and its lubricating oil. The process involves taking a small sample of oil from the equipment’s lubrication system, comparing it to a virgin stock sample through a series of laboratory tests, and examining the results to ascertain the “wellness” of machinery and oil.

Oil analysis is a highly effective and inexpensive means of deciding when to change lubricants based on condition; predicting incipient bearing failure so that appropriate action can be taken in time to avert it; and diagnosing bearing failure should it occur. Yet, despite its availability and proven track record since the 1940s, oil analysis is still misunderstood and overlooked as a proactive strategy in many of today’s industrial plants. Relatively easy to set up, this type of program should be implemented in any facility that purchases, stores, dispenses, changes, uses, or recycles lubricants as part of its manufacturing or maintenance process.

Basic implementation steps

A successful oil-analysis program can pay for itself in a matter of weeks because it delivers multiple benefits, including:

  • oil change intervals (often extended) that are optimized to the machine’s ambient conditions and operational use requirements
  • probable reduction in lubricant-inventory purchase costs and spent-lubricant disposal costs
  • enhanced understanding of how bearings can fail (or are failing) in their operating environment so that such incidents can be controlled or eliminated
  • increased asset reliability, availability, and production throughput.

(NOTE: The potential for program success is greater if a site already has a work-management approach in place, thereby assuring completion of corrective actions in a timely manner whenever oil-analysis reports recommend them.)

Similar to other successful change-management initiatives rolled out across the organization, an oil-analysis program will benefit from a piloted, phased implementation. Taking a stepped approach allows management and workforce alike to become accustomed to the new sampling and reporting processes and quickly iron out any problems prior to a full-scale launch.

Step 1: Appoint a program champion.

All programs require a “go to” decision-making person who advocates on the initiative’s behalf and is committed to making the implementation a success. The champion should be at a supervisor or manager level.

Step 2: Choose a suitable pilot area/machine.

Oil analysis begins with sampling the oil and can include lubricating and hydraulic fluids. Choosing a suitable program pilot will depend on the type of industry and business operation. Typical starting points to evaluate might include:

  • a critical product, process, line, or major piece of equipment, i.e., criticality determined by constraint and/or lack of backup, downtime costs, and product quality
  • mechanical equipment with moving components and lubricant reservoirs serving recirculating oil-transmission systems, mechanical and/or hydraulic in design.

Step 3: Conduct a lubricant audit.

A lubricant audit, required to identify what lubricants are currently employed in service at the plant, calls for the following:

  • Check work-order system PM (preventive maintenance) job plans for lubricant specification(s).
  • Check on or near the lubricant reservoir for lubricant identification labels or stickers.
  • Check for matching MSDS (Material Safety Data Sheets).

If a discrepancy is found at this stage, outside assistance from a lubrication expert or supplier may be needed to determine if the correct lubricants are being specified for particular applications.

Step 4: Choose a laboratory.

Not all oil-analysis laboratories are created equal, making your choice of one an important step. Most oil-analysis reports are divided into four major sections (a minimal record sketch follows the list) that provide:

  • sampling and virgin-oil specification data
  • spectral-analysis testing results for wear elements identified as lubricant additives or contaminants
  • additional physical test results for viscosity, water, glycol, fuel, soot, and acidity
  • associated conclusions and recommendations.
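
For sites that file results electronically, those four sections map naturally onto a simple record. The sketch below is illustrative Python only; field names are invented for the example and do not reflect any laboratory's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class OilAnalysisReport:
    """Illustrative record mirroring the four common report sections."""
    sample_info: dict                                   # machine ID, date, virgin-oil spec
    spectral_ppm: dict = field(default_factory=dict)    # wear elements, additives, contaminants
    physical_tests: dict = field(default_factory=dict)  # viscosity, water, glycol, fuel, soot, acidity
    recommendations: str = ""                           # conclusions and corrective actions

report = OilAnalysisReport(
    sample_info={"machine": "Gearbox-07", "date": "2016-03-01"},
    spectral_ppm={"Fe": 18, "Cu": 4, "Si": 9},
    physical_tests={"viscosity_cSt_40C": 221, "water_pct": 0.02},
    recommendations="Resample in 30 days; monitor iron trend.",
)
```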

Some laboratories specialize in engine-oil analyses that focus more on physical testing for water, glycol, fuel, and soot. Others specialize in industrial-sample analyses that focus more on wear-particle evaluations, some physical tests for viscosity, water, and acidity, and post-mortem testing for root-failure causes using ferrographic techniques. Some laboratories have technicians who specialize in both areas.

The key to any testing program is receiving results in a timely and consistent manner, especially where critical equipment is involved. When interviewing laboratories, be sure to rate their sample “turnaround” time and how they can assure testing consistency (usually through use of dedicated technicians to test your samples). Working with a laboratory should be viewed as a long-term relationship. The chosen facility will build and analyze your complete data history and make conclusions and recommendations based not only on your current sample versus its virgin sample counterpart, but also on an understanding of your plant ambient conditions and overall trending history of each sample.

Step 5: Set up a pilot sampling program.

A good laboratory will work with you to set up your sampling program, supply (in some way) sampling-point hardware, extraction pumps, and quality sample bottles, as well as train your staff to consistently collect “clean” oil samples.

The best oil samples contain maximum data density with minimum data disturbance—meaning the sample should best represent the oil’s condition and particulate levels as it flows through the system or as it sits in a reservoir. For example, if you extract a sample from the bottom of a reservoir in a non-pressurized gearbox lubrication system, the particulate fallout will be dense due to large wear particles and/or sludge accumulation and not correctly represent the remaining 80% to 90% of reservoir lubricant that actually lubricates the gears.

In a pressurized re-circulating lubrication system, samples are best taken as the machine is running and at operating temperature, from a live fluid zone where the lubricant is flowing freely. Whenever possible, the sample should be extracted from an elbow, thereby taking advantage of the data density caused by fluid turbulence. Sample points are best located downstream of the lubricated areas to catch any wear elements before they’re filtered out by inline pressure or gravity filters.

Virgin samples of all lubricants in the pilot program will need to be collected and sent to the laboratory for checking. They’ll serve as benchmarks that tell the laboratory what additive ingredients and lubricant condition represent a normal state, leading to easier identification of additive depletion and wear elements in subsequent samples.

Outside assistance/training from a lubrication expert or oil-analysis laboratory is advisable when setting up the pilot sample points.

Step 6: Set up a work-management approach to sampling.

Lubricant sampling must be performed consistently, on a frequent basis—making it a suitable candidate for a maintenance/asset-management work-order system. Using the written sampling procedure as a job plan, the task can be set up effectively through PM scheduling software.

Extracting and sending a sample to the laboratory is only the first half of the oil-analysis process. Someone (usually the planner, if one exists) has to receive the results by email, read the recommendations, take any necessary corrective action, and file the laboratory report electronically to history, usually as an attachment to the PM sampling work order. This will require developing a workflow procedure and training all maintenance staff involved in the program on it.
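
Where dedicated PM software is not yet in place, the recurring-task logic is simple enough to sketch. The following is hypothetical Python; a real CMMS would generate the equivalent work order automatically:

```python
from datetime import date, timedelta

def next_sampling_work_order(machine_id, last_sample, interval_days=30):
    """Generate the next oil-sampling work order from a fixed interval,
    mirroring what a PM scheduling system would issue automatically."""
    due = last_sample + timedelta(days=interval_days)
    return {
        "machine": machine_id,
        "task": "Extract oil sample per written sampling procedure",
        "due_date": due.isoformat(),
        "attach_results_to_history": True,  # lab report filed to this work order
    }

print(next_sampling_work_order("Gearbox-07", date(2016, 3, 1)))
```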

Step 7: Commence sampling and program roll-out.

An oil-analysis program will identify major contamination and wear problems with the first sample set. Trending can begin with the third set, at which point the site can start identifying and predicting negative trends toward potential failure and schedule corrective action before failure occurs. Ideally, a pilot program should run for approximately three months or longer to show basic results before being tweaked and rolled out to the next area of the plant.
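
As a minimal illustration of that trending logic (Python; the threshold is invented for the example, not laboratory guidance), a straight-line fit through successive wear-metal readings flags an adverse trend once at least three samples exist:

```python
import numpy as np

def flag_wear_trend(iron_ppm_history, max_ppm_per_sample=2.0):
    """Flag an adverse wear trend from successive oil-sample results.

    Requires at least three samples, matching the rule of thumb that
    trending begins with the third sample set."""
    if len(iron_ppm_history) < 3:
        return False  # not enough history to trend yet
    x = np.arange(len(iron_ppm_history))
    slope, _ = np.polyfit(x, iron_ppm_history, 1)  # ppm increase per sample
    return slope > max_ppm_per_sample

print(flag_wear_trend([12, 15, 21, 26]))  # True: iron climbing ~4.8 ppm per sample
```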

Once a program is working and providing results, larger enterprises may wish to consider investing in a staffed, in-house laboratory to deliver faster results turnaround. MT

Contributing editor Ken Bannister is a Certified Maintenance and Reliability Professional and certified Machinery Lubrication Engineer (Canada). He is the author of Lubrication for Industry (Industrial Press, South Norwalk, CT) and the Lubrication Section of the 28th Ed. of Machinery’s Handbook (Industrial Press). Contact him at kbannister@engtechindustries.com.

Learn more:

  • Procuring the Highest Quality Oil Sample
  • Mining Success with World-Class Oil Analysis
  • Take Oil Analysis to the Cloud
  • Oil Analysis Data Evaluation

October 15, 2015

Saudi Aramco named Emerson Reliability Program of the Year

Saudi representatives receive the Emerson Reliability Program of the Year award at the Emerson Exchange conference in Denver.

What’s it worth to reduce rotating equipment maintenance by 9%, slash rotating equipment failures by 90%, and eliminate 50,000 man-hours of preventive maintenance activities?

At the Saudi Aramco Ras Tanura Refinery in Saudi Arabia, it represented more than US$10 million annually and resulted in the refinery being named the 2015 Emerson Reliability Program of the Year. “Our results went from very good to excellent when we implemented our reliability program,” said Eyad Al-Basrawi, reliability section head. “Some major accomplishments related to our operational performance KPIs include improving the mean time between failures [MTBFs] for our 48 compressors by 153%, for our 116 turbines by 167%, and for our 1,653 pumps by 53%.”

Saudi Aramco was one of four finalists that presented reliability programs live to conference attendees and a panel of judges this week at the Emerson Global Users Exchange in Denver, CO. The Exchange presentations were the culmination of three competitive rounds that began with a 15-page questionnaire about each facility’s reliability program.

The two runners-up were CMC Steel South Carolina and Corbion in Blair, Nebraska, with second place going to Exelon’s LaSalle County Nuclear Generating Station southwest of Chicago. During this session, the four companies presented best-in-class reliability programs, highlighting the effective use of reliability-based technologies, effective work processes, integrated maintenance best practices, leadership commitment, and return on investment.

Saudi Aramco’s accelerated reliability program met those requirements and more. “Our responsibility is to supply the demand of the kingdom so 24/7 operations are critical,” said Al-Basrawi. Back in 2013, the Ras Tanura refinery conducted a reliability self-assessment with the help of an outside source. It set out on a fast-track program it dubbed “Reliability Accelerated Culture Enhancement” (RACE) to inculcate a reliability culture at its facilities and improve rotating equipment reliability.

“It is a holistic and aggressive plan that is aimed to instill the reliability culture and uplift asset reliability in an accelerated manner,” said Al-Basrawi. “RACE is fully developed in-house. We created a reliability team to connect the technology to the people.”

A critical element was the RTR reliability boot camp program. “The attendees spend five days from 7 a.m. to 9 p.m., talking about reliability,” said Al-Basrawi. “It accelerated the development of the workforce and supported transfer of the critical knowledge and key skills. Our operators, technicians and engineers created success story videos that were a very popular way of communicating success.”

In addition to significant improvements to MTBF of rotating equipment, the refinery has seen a 180% increase in defect identification; restructured its vibration monitoring program for a $1.1M cost avoidance; improved air system efficiency by 20%, eliminating $850K of wasted energy; and boosted steam-turbine performance, eliminating $7M in wasted energy.

In response to the judges noting Saudi Aramco made remarkable progress in a short amount of time, Al-Basrawi credited the refinery’s success to the legwork and homework done at the start. “Getting the operators and technicians involved was a big part of the change,” he said.
