Archive | July 2008


MT News

News of people and events important to the maintenance and reliability community

SERVOMEX APPOINTS HURLEY AS GM AMERICAS

Servomex, a supplier of reliable, high-performance gas analysis solutions to a wide range of industries, has named Charles “Chuck” Hurley as general manager of Servomex Americas, effective immediately. Hurley comes to his new position from gas detection manufacturer Honeywell Analytics, where his previous responsibilities included the design, manufacture, sales and service of industrial gas detection in a variety of roles, including director of global services and European service manager. His appointment is one of several moves by Servomex to emphasize ongoing improvements in manufacturing, supply and customer service. Among these moves has been the recent establishment of three region-specific business centers dedicated to providing enhanced pre-sales and post-sales support to customers in the EMEA, Americas and Asia Pacific regions, respectively. (Editor’s Note: The Servomex Americas Business Center is located in Sugar Land, TX.)

NEW HEADMASTER FOR LITTLE RED SCHOOLHOUSE®

Larry Konopacz has been appointed manager of Training and Education at Bell & Gossett’s Little Red Schoolhouse® in Morton Grove, IL. He succeeds Roy Ahlgren who retired in March of this year. Konopacz began his ITT career in 1983 as a junior CNC programmer in Bell & Gossett’s Manufacturing Engineering Department, and progressed through a series of management-level engineering positions. During that time, he programmed many of the major components used in a variety of the company’s products. In 1992, he transitioned from engineering to manufacturing before being named factory manager in 1995. Konopacz is an ITT certified VBSS Black Belt and Lean Master. He holds BS and MS degrees in Industrial Technology from Western Illinois University, and an MBA in Managerial Accounting from DePaul.

SKF SET TO ACQUIRE PEER BEARING COMPANY

SKF has signed an agreement with the owners of U.S.-based PEER Bearing Company (PEER) to acquire PEER and its manufacturing operations in China and Thailand. Headquartered in Waukegan, IL, PEER primarily manufactures deep groove ball bearings and tapered roller bearings, most of which are sold to North American customers. In 2007 the company had approximately 1400 employees and sales of almost $100M. According to SKF, the acquisition is expected to strengthen the corporation’s presence in certain North American market segments that it doesn’t currently serve, including Mechanical Power Transmission. PEER will continue to operate as a stand-alone business, acting independently on the market under its existing PEER brand. The proposed transaction is subject to certain conditions to closing and requires approvals by relevant authorities.

MOTOROLA MAKES INVESTMENT IN APPRION

Motorola, Inc., through Motorola Ventures, its strategic venture capital arm, has joined a number of other groups to invest in Apprion, Inc., a supplier of advanced wireless products, applications and services for industrial plants. (Apprion notes that its IONizer product was the process industry’s first industrial-grade, multi-RF, wireless network appliance.) With the Motorola investment, Apprion has raised over $23.5M. Other participants in the Apprion funding include Anvil Investment Associates LP, CTTV Investments LLC (the venture capital arm of Chevron Technology Ventures), Advanced Technology Ventures and Allegis Capital.

TYCO FLOW CONTROL OPENS NEW SERVICE & REPAIR CENTER

Tyco Flow Control (TFC), a business of Tyco International Ltd., is opening a new 42,000-square-ft. service and repair center at 9560 New Decade, in Pasadena, TX this month. The facility will provide all service and repair operations previously performed at the company’s Pasadena distribution center. This expansion not only allows TFC to continue providing ongoing service for pressure relief valves and tank vents, but also integrates its field service, quarter-turn, automation and control valve repair capabilities. Some of the site’s square footage will be used for two new testing stands capable of testing valves up to 30″ and for additional machining capabilities. MT



Demystifying HVAC Drive Anomalies

A new handheld test tool makes high-end electrical troubleshooting easier than you thought.

If anyone can wring every last ounce of functionality out of a piece of electronic test equipment, it’s Chris Vogel. At Siemens Building Technologies, Vogel has his work cut out for him keeping HVAC systems running at their peak for the company’s large commercial customers during peak Florida weather marked by seemingly nonstop 90 F temperatures and 95% humidity. And that’s just one of the challenges faced by technicians at Siemens Building Technologies, which plays a more sweeping role in its customers’ success: ensuring energy efficiency, comfort, protection against unauthorized access, and fire safety year-round for every building or office.

Vogel, an HVAC technician, becomes energized when discussing the return on investment in his handheld ScopeMeter® Test Tool. “Out at one large site, where we monitor and troubleshoot variable frequency drives (VFDs), component-level repair can often mean the difference between a $20 repair part and a $100,000 repair bill. I know firsthand, because we recently documented that very scenario.”

On large VFDs, Vogel uses his ScopeMeter to uncover capacitance problems, transistor firing mishaps and even bleed-throughs on a gate. “Of course, a transistor is basically a lightning-fast switch,” he says. “It switches back and forth between open and closed, and it can sometimes start to break down. When that happens, motors will start doing weird things. For example, at load stage, we’ll actually see the motor banging back and forth as if it is not sure which way to turn.”

Multitasking problem solver
Vogel thinks it’s important that a technician be able to characterize VFD problems by capturing a waveform from the offending drive. His premise: A signal is much more telling when presented in a waveform view than in a single static voltage reading. The signal has a shape and value that may look right at a glance, but could just as easily have a distortion or rough “edge,” or a momentary spike almost too short to be seen. Either problem, or a host of other signal anomalies, would go undetected in a single numeric reading of the signal.

“The scope allows me to record information from a number of sources—sine waves on the VFD inputs and outputs, current and voltage—and compare it, so that I can derive a power factor for the circuit,” Vogel illustrates. He is able to store up to 25 permanent records for recall at any time. “Sometimes I will see a suspect waveform and say, here’s what it looks like during this slice of time, but here’s what it should look like.” With that, he recalls a stored image of the same waveform, recorded when the drive was operating properly. “Storage scopes create a graphical representation of the problem, versus a merely empirical value that a multimeter would show. Of course, with ScopeMeter, we get both.”
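
For readers who like to see the arithmetic behind such a reading, the short sketch below is a generic illustration (not tied to any Fluke product) of how power factor can be derived from simultaneously sampled voltage and current; the 60 Hz waveforms, 170 V peak, 10 A and 30-degree lag are invented values.

    import numpy as np

    def power_factor(voltage, current):
        """True power factor from simultaneously sampled voltage and current."""
        v = np.asarray(voltage, dtype=float)
        i = np.asarray(current, dtype=float)
        real_power = np.mean(v * i)                    # average instantaneous v*i (W)
        apparent_power = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))  # Vrms * Irms (VA)
        return real_power / apparent_power

    # Hypothetical 60 Hz waveforms: current lags voltage by 30 degrees
    t = np.linspace(0.0, 1.0 / 60.0, 1000, endpoint=False)
    v = 170.0 * np.sin(2 * np.pi * 60.0 * t)                    # about 120 Vrms
    i = 10.0 * np.sin(2 * np.pi * 60.0 * t - np.radians(30.0))  # lagging load current
    print(round(power_factor(v, i), 3))                         # ~0.866, i.e. cos(30 deg)

Real power is the average of the instantaneous voltage-current product, apparent power is Vrms times Irms, and their ratio is the power factor the technician reads off the drive.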

With the weather patterns and lightning in Florida, Vogel explains, it’s not uncommon for line voltages to rise and fall precipitously. “We were working on a current source drive that I wanted to retrofit, because it had taken a hit from a lightning bolt.” At the time, technicians suspected that the drive had been damaged beyond economical repair, and they decided to replace the drive itself, but not the main feeds.

Shortly after the drive began to ground-fault and failures in the building’s electrical distribution network began to show up, the customer asked Vogel to come out. After doing some low-level diagnostics—throwing amp clamps on the wires and comparing phases and phase draws—he placed a ScopeMeter on the system and discovered there was a lot of line notching (Fig. 1) going on.

Vogel explains that nonlinear ac loads (loads that draw current in a non-sinusoidal way rather than in proportion to the applied voltage) create harmonic distortion (see Fig. 2 on the following page). Examples of nonlinear loads include welders, VFDs and battery chargers. Distortion is a result of the non-sinusoidal waveform the drive generates, Vogel notes. He goes on: “Any time you have long conduit runs, the wires create magnetic fields around themselves. With harmonic distortion, current is actually reflected back into the wiring. It becomes a self-sustaining loop. That’s what we call line notching (as shown in Fig. 1). As you switch the ac current on and off, it’s the equivalent of opening and closing a valve on a water pipe very fast, causing pulsations in the flow. Line notching is the electrical equivalent of that phenomenon.”
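
Once a waveform has been captured, harmonic distortion reduces to a ratio: the combined RMS of the harmonics relative to the fundamental. The sketch below is a minimal illustration, assuming the capture holds a whole number of cycles; the 256 samples per cycle and the 10% fifth-harmonic content are invented for the example.

    import numpy as np

    def thd(signal, samples_per_cycle, max_harmonic=40):
        """Total harmonic distortion as a fraction of the fundamental,
        assuming the capture holds a whole number of cycles."""
        spectrum = np.abs(np.fft.rfft(signal))
        fund_bin = len(signal) // samples_per_cycle     # FFT bin of the fundamental
        harmonics = [spectrum[fund_bin * n]
                     for n in range(2, max_harmonic + 1)
                     if fund_bin * n < len(spectrum)]
        return np.sqrt(np.sum(np.square(harmonics))) / spectrum[fund_bin]

    # Hypothetical 60 Hz wave carrying 10% fifth-harmonic (300 Hz) content
    samples_per_cycle = 256
    t = np.arange(10 * samples_per_cycle) / (60.0 * samples_per_cycle)  # 10 cycles
    wave = np.sin(2 * np.pi * 60.0 * t) + 0.1 * np.sin(2 * np.pi * 300.0 * t)
    print(round(thd(wave, samples_per_cycle), 3))       # ~0.100, i.e. 10% THD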

Circling back to the original problem, Vogel notes that the high current for each of the three phases had led the original installers to use four parallel conduits for each phase. In such a configuration, a smaller conductor for each phase would typically have been run down a single conduit, with multiple conduits going to the equipment and each smaller conductor terminated on a terminal block for its appropriate phase. But instead, the installers had run feeds A and B in one conduit, B and C in another and C and A in the third. “The drives were passing almost 42 A to ground, causing them to trip on ground faults and over-voltages,” Vogel says. “Of course, with the phase conductors running through conduits and the sheer number of conductors (sixteen 500 MCM runs), they were concealed, and nobody had thought to look further.”


A better look at power factor
Vogel recently was called on to solve a power factor problem in a large commercial building where a number of aging 250 hp chiller motors were in service. In high ambient weather conditions, these old chillers would load up, and Vogel could use ScopeMeter to see the phases moving farther and farther apart (Fig. 3).

As the chill water temperature came down, the power factor would drop from about 0.7, which was acceptable, to about 0.32—the lowest Vogel had ever seen it. Then, as he ‘staged’ the equipment down (namely, drives on the cooling towers, drives on the primary loop pumps and drives on the primary chill water system), the phases would come back in sync and the power factor would rise again.

“You can view readings on the meter,” Vogel notes, “but you don’t understand what’s causing the power-factor drop until you look at the waveform itself.” As he tells it, you can see the field collapsing as the motor winds down, and you can see the current and voltage phases come closer to being in sync (Fig. 4). As the power factor comes back and approaches 1.0, it’s fascinating to watch, and the customer is more likely to understand the problem. More importantly, it helps one understand how to correct the problem.

Capturing the benefits
One of Vogel’s new projects is to install power-factor correction capacitors on an MCC (Motor Control Center) panel at a utility customer’s site. The capacitors will be installed in parallel with the connected circuits. This is not just about improving power factor, but about keeping costs in line.

Many electric utilities charge building owners a penalty for low power factor. (One utility, for example, charges building owners $0.14 per kVAR hour when power factor drops below 0.97.)

According to Vogel’s calculations, adding 65 kVAR of capacitance is about a $200,000 proposition. The customer is running two 800-ton machines fully loaded during the peak of summer in 95 F Florida heat and 90% humidity.
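
The capacitor sizing behind a number like 65 kVAR follows the standard power-triangle relationship. Because the article does not give the panel’s real-power loading, the values in the sketch below are hypothetical, and the penalty estimate simply applies the $0.14-per-kVAR-hour rate quoted earlier to an assumed 2000 hours a year below the utility’s threshold.

    import math

    def correction_kvar(load_kw, pf_existing, pf_target):
        """Capacitive kVAR needed to raise power factor at a given real load
        (power-triangle formula: kVAR = kW * tan(acos(pf)))."""
        return load_kw * (math.tan(math.acos(pf_existing)) -
                          math.tan(math.acos(pf_target)))

    # Hypothetical 150 kW panel load corrected from 0.70 to 0.85 power factor
    kvar_needed = correction_kvar(150.0, 0.70, 0.85)
    print(round(kvar_needed, 1))                 # ~60.1 kVAR

    # Rough annual penalty at $0.14 per kVAR-hour, assuming the panel spends
    # 2000 hours a year below the utility's 0.97 power-factor threshold
    print(round(0.14 * kvar_needed * 2000))      # roughly $16,800 a year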

Essentially, the customer’s air conditioning plant is running at 100% electrically, but not mechanically, Vogel points out, noting that the customer’s electric bill varies from $50,000 to $60,000 a month. “We determined that, if we can increase the power factor on this panel to 0.85, the customer’s electrical consumption will drop by almost one-third. That correction, considering the utility’s high power consumption, will give them a payback period of less than one year. And, they could get additional capacity without any work on the mechanical system!”

ScopeMeter, Vogel says, is what identified the problem. “We took it to the customer and said ‘Hey, as we stage these motors down, as we shut things off, your power factor starts to rise again.’ First, we measured the signal on the MCC panel, and then we measured the signal on their main power panel. We set the same function up on the chiller plant, and we could see the power factor clean up.”

Vogel confirms that everyone now understands the nature of the problem, explaining that he had directed the customer to an Internet site where he could calculate his own energy savings from improving power factor. “Next, we stood by as the customer observed the current dropping with modifications to the panel, not to mention that they started to see immediate reductions in total kilowatts used. Down here you can’t beat the heat, but you can make it a little bit more palatable.”

“You’ll laugh, but I’m a union pipe fitter by trade,” Vogel says, “and here I am doing high-end electrical troubleshooting. The ScopeMeter has taken my trade in a whole new direction.” MT


Hilton Hammond is product manager for Fluke Precision Measurement. A technical expert on ScopeMeter test tool products, LCR meters and video test equipment, he has worked for Fluke Corporation for nine years. Originally from South Africa, Hammond began his career in calibration. Telephone: (425) 446-5381; e-mail: hilton.hammond@fluke.com



How Clean Is The New Oil In Your Equipment?

It’s a nagging, industry-wide question, and one that keeps many a supplier and end user up at night.

In the multi-step process of moving lubricants from THEIR tanks to YOUR equipment, where does contamination start? At what point do dirt and/or moisture enter the supply chain? Is it a problem with storage, handling, dispensing or a combination? This three-part series aims to answer these questions once and for all. Based on studies of actual field data on the cleanliness of new oil put into equipment, it will provide recommendations on how to more effectively guarantee cleanliness in the future. A continuing theme in this series will be the fact that it takes a strong, cooperative effort among lubricant supplier, distributor/marketer and end user for any oil cleanliness program to be successful (see Fig. 1).

Most lubricants purchased today come from a distributor and are delivered in the following ways:

  • Bulk shipments from the lube blending plant delivered directly to the customer
  • Bulk shipments from stored lubricants at the distributor
  • Drums and pails filled at the blend plant and delivered by distributor
  • Drums and pails filled at distributor from oil in tankage

How the lubricant is delivered by the distributor will have a major impact on oil cleanliness.

The lubricant blender also plays a key role in oil cleanliness.

Typically, turbine and hydraulic oils are sent out of the blend plant at a cleanliness of 19/17/14. Once it is put in trucks or drums, the delivered oil will not be as clean. (One major manufacturer that is filtering hydraulic oil and putting it in new sealed steel drums, however, is achieving a cleanliness rating of 14/11/9. There is a cost for this procedure, but customers know they will receive very clean hydraulic oil as a result of it.)

Some companies may require special handling of their oils. A case in point is General Electric, which has a minimum cleanliness rating for turbine oils of 16/13. This is achieved by delivering filtered turbine oil to GE in a dedicated bulk truck. Lubricant suppliers are providing this service either directly from the blend plant or through filtration at the distributor.

The end user also has a responsibility to maintain oil cleanliness. Oil can become dirty very quickly if it is not handled or dispensed properly. The customer needs to cooperate closely with the lube blender and distributor to develop a program achieving targeted oil cleanliness levels economically.

Scope of this study
In our study, new lubricants are being evaluated for two major contaminants: particles and water. All laboratory test work is being conducted by MRT Laboratories, an ISO 17025:2005-certified laboratory in Houston, TX. The following tests are being performed:

  • Viscosity @ 40 C
  • Karl Fischer Water Determination
  • ISO 4406 Particle Count
  • Emission Spectroscopy

The following samples were purchased from four major lubricant manufacturers for evaluation:

  • ISO 32 turbine oil
  • ISO 46 AW hydraulic oil
  • ISO 220 EP gear oil
  • ISO 100 R&O oil

As shown in Fig. 2, the lubricant flow through a distributor operation is being examined for both water and particle contamination. The major focus will be on turbine and hydraulic oils. Fluid cleanliness will be examined at each stage to determine the effect of storage and handling on contamination.

The final phase of the study will focus on end-user handling of lubricants. Very clean fluid can be delivered to the plant, but without proper handling all efforts for clean oil are wasted.

Lubricants at several end-use facilities will be examined to determine the introduction of contaminants at the various stages of lubricant dispensing (as indicated by Fig. 3). The use of filters and filter carts in achieving fluid cleanliness targets also will be examined.

After all study data is collected, recommendations will be made on the most economical way to achieve targeted fluid cleanliness. Subsequent installments in this series will address best practices for lubricant blenders, distributors and end users.

ISO 4406:1999 Cleanliness Code
Cleanliness will be measured by the use of an optical laser counter that measures the number and size of various particles. Although this procedure was discussed thoroughly in a previous article on oil cleanliness (see pgs. 34-35, Lubrication Management & Technology, September/October 2007), it will be reviewed here.

The data in Table I are used to assign a cleanliness code number for a fluid.

The three numbers in the code represent the particle counts at ≥ 4 microns, ≥ 6 microns and ≥ 14 microns. The number of particles is measured with a particle counter and recorded by size per milliliter of fluid. Take, for example, a fluid with the following particle count:

≥ 4 micron = 8500/ml
≥ 6 micron = 1650/ml
≥ 14 micron = 300/ml

The shorthand notation according to ISO 4406:1999 would be 20/18/15 for this fluid. A lower number represents a cleaner fluid. Note, too, that a one-number increase in the cleanliness code represents a doubling in the number of particles. The other articles in this three-part series will utilize this code to represent fluid cleanliness.
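
For anyone who wants to automate the lookup, the sketch below assigns the three-part code from raw counts. Table I itself is not reproduced here; the upper bounds used are the commonly published ISO 4406:1999 range limits, and the example reproduces the 20/18/15 result above.

    import bisect

    # Commonly published ISO 4406:1999 ranges: each code covers counts per ml
    # up to and including the bound shown; each step doubles the count.
    UPPER_BOUNDS = [10, 20, 40, 80, 160, 320, 640, 1300, 2500,
                    5000, 10000, 20000, 40000, 80000, 160000]
    CODES = list(range(10, 25))      # range codes 10 through 24

    def range_code(count_per_ml):
        """ISO 4406 range code for a single particle count per milliliter."""
        return CODES[bisect.bisect_left(UPPER_BOUNDS, count_per_ml)]

    def iso4406(count_4um, count_6um, count_14um):
        """Three-part cleanliness code such as '20/18/15'."""
        return "/".join(str(range_code(c)) for c in (count_4um, count_6um, count_14um))

    print(iso4406(8500, 1650, 300))   # 20/18/15, matching the example above

Counts above 160,000/ml would need additional table entries, but the point stands: each one-code step doubles the allowed particle count.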

Conclusion
Oil cleanliness is a very timely topic. Many end users today are demanding cleaner oil without understanding the costs involved. The next articles in this series will address the issue of the cleanliness of oil currently supplied and best practices to assure that the oil will be clean when put into the equipment. The relationship between the lubricant supplier, distributor and end user needs to be cooperative and not adversarial. They all need to work with one another to assure clean oil at an economical cost.

Realistic cleanliness goals need to be established by equipment type before any program is implemented. A total program needs to be established, including the use of proper filtration when the fluid is in the equipment. This filtration also has been discussed in a previous article (pgs. 8-12, Lubrication Management & Technology, November/December 2007). Like everything else, effective filtration requires a strong cooperative effort between the end user and the filter manufacturer.

The second installment in this series will appear in the October issue of Maintenance Technology. MT


Ray Thibault is based in Cypress (Houston), TX. An STLE-Certified Lubrication Specialist and Oil Monitoring Analyst, he conducts extensive training in a number of industries. Telephone: (281) 257-1526; e-mail: rlthibault@msn.com

Mark Graham is technical services manager for O’Rourke Petroleum in Houston, TX. Telephone: (713) 672-4500; e-mail: mgraham@orpp.com



Stretch Your Shrinking Budget With RCA

Root Cause Analysis could be one of the strongest tools that your maintenance organization ever puts its hands around.


When economics inspire belt-tightening, corporate leadership often cuts programs that don’t scream savings and profit. After all, those programs cost money to implement and maintain, and their effectiveness and return on investment are often unproven.

For a program to survive this scrutiny, it must stand on its own value. Root cause analysis (RCA) is one such program. Executives who are not close to the RCA process might notice only the expenses for employee training (or perhaps they only notice the high profile RCAs that occasionally occur). People closer to the RCA program have an intuitive sense of the program’s value—enough to know that it should serve as the cost-cutting tool rather than becoming a victim.

When management is evaluating the maintenance function, it may be fairly easy to see how RCA is helping to cut costs, but not so obvious that it is generating revenue. Historically, many managers have not shared information about business revenue and profit margins with the maintenance team. In some cases, the managers themselves do not know the profitability metrics. When this occurs, maintenance teams might not be aware of their own bigger-picture revenue impact. Thankfully, this situation is changing.

So, how can RCA program champions in maintenance develop a tangible understanding of the associated benefits, cost savings and profit generation within the context of revenue goals? Further, how can the RCA champions effectively inspire senior leaders to recognize the return on investment? What exactly are the results, why are they worth calculating and sharing, and how is this best accomplished? Can the case be presented powerfully enough for executives to recognize that investing more in the RCA program will pay off in the short and long term?

Sample RCA results
Client savings data indicates that many companies see the immediate return on money invested in RCA training. In our company’s experience, a fair estimate of initial training costs per person—including software and training courses—is about $1500. In most cases, if one trained person completes just one RCA and implements solutions, the savings alone pay for an entire group training class twice over. So there’s an immediate 100% return on investment (ROI). This return grows exponentially when additional people from the same class perform RCAs on a regular basis.

Once the RCA program is up and running, the paybacks start to roll in, as the following companies reported.

  • A global chemical company evaluated more than 100 RCAs performed by its reliability engineers and found the average value returned on each to be $75,000 USD per year. The average cost per RCA, including solutions, was $1500, yielding an ROI of 4900% after one year. RCA also is a key part of this company’s safety program, where it has realized more than a 75% improvement in its injury and illness incident rate—from 2.4 to 0.59—in an eight-year period.
  • A second global chemical company found that each RCA resulted in $17,000 USD per year savings by eliminating maintenance problems. This organization’s average ROI on each RCA was 1100% after the first year.
  • A manufacturing company saved $1,300,000,000 USD through RCA, by discovering an innovative solution to one of its product problems. In that same RCA, this company also discovered $19,000,000 USD per year in previously unknown waste that could be eliminated.
  • A global telecommunications company has saved millions of dollars by using RCA to analyze and correct problems in its global mobile phone and networking business through reduced service interruptions and outages.

Calculating results
It’s surprising that so many companies fail to calculate and communicate payback or ROI on RCA, considering the impressive results these types of analyses often deliver. Plain and simple, there’s a perception that it’s too difficult—or even impossible—to obtain the data needed. For most of us who don’t design rockets for a living, having exact data is rare. (For those who do design rockets, having exact data also is a rarity.) Still, by using conservative data, an organization should be able to develop defendable metrics that demonstrate the value of its RCA. Remember, data is used to draw conclusions for the end-goal of making a decision. With a little digging, sufficient data normally is available to make very solid decisions—resulting in admirable payback.

Herein lies an opportunity to understand that you can be conservative—and, accordingly, relatively accurate—without having exact data. When calculating payback, you need close estimates. If an organization spends a great deal of time seeking the ultimate precision in this data, it likely is spending more time than the situation warrants. The result is diminishing utility from the additional precision, as well as from problems that are allowed to linger a little longer and cost a little more. Penciling in conservative estimates is the safest approach. Even if other people reviewing the data are inclined to poke holes in it, you can objectively respond that the savings or income is probably much higher.

Since arriving at these figures may seem easier said than done, why and how did the previously mentioned companies do it? Calculating the results of qualitative programs enables their champions to evaluate program effectiveness individually and collectively. Calculating ROI illuminates needs for program improvement and—when done thoroughly and reasonably—earns credibility within the organization.

Return on investment
Return on Investment (ROI) = Net Savings / Cost x 100, where:

Net Savings = Annual Cost of a problem before RCA minus annual cost of problem after RCA solutions are implemented minus cost to implement RCA and solutions

Cost = Annualized Cost of: (RCA + Solution + RCA training)

  • What are your initial costs, including training, software and hardware? For RCA training costs, if you don’t know for sure, a one-time cost of $1500 per person is common.

  • What does it cost to conduct an RCA?
    • When in doubt, guess high to generate a conservative estimate that will be considered credible. It would not be out of line to see the following:
      • Four people might each spend 2.5 hours involved in a single RCA, totaling 10 hours. In addition, the facilitator might spend approximately 10 hours researching the problem, securing the RCA team, preparing for the RCA and writing the follow-up report. Estimating that each of those combined 20 hours is worth about $100, the cost of this RCA would be approximately $2000.
  • What are your assumptions regarding your capacity and value?
    • For instance, if you implement a project that streamlines a process and increases capacity, did the additional product sell? When there are incremental increases in throughput, uptime, equipment reliability and maintenance savings, there may be revenue improvement. The key is whether enough demand exists so that all of the product you are able to make is sold. Every additional unit sold contributes to the bottom line. The value of the additional profit resulting from the additional sales should be included in the RCA “value.” In the eyes of executives, that’s where the real value is.
    • Use product profitability values that are recognized by the business’ accounting department, when possible. These are the numbers common to the leaders—and what are typically used in other business decisions. If your leaders are unaware of the profitability numbers, kindly seek out the business accountant who does know the numbers and then share that data broadly. Once people understand the business profitability numbers, better “spend” decisions will be made across the board.
  • Will you use revenue or Net Profit in your calculations?
    • Net Profit is recommended. In its simplest form, Net Profit is the price you charge your customer for your product minus the cost to produce it. It’s important to factor in a “cost of goods sold” that includes overhead, utilities, labor, raw materials, etc. If you simply use “revenue” (product sales price multiplied by the sales volume), your numbers will be very high (probably four to five times too high).
    • When calculating your costs, include expenses related to analysis, problem-solving and solutions. In a typical equipment failure situation, an average maintenance shop might add up the costs of equipment, parts and labor. That’s a good start, but it’s not the whole picture. There are many other ancillary costs that are important to tally, such as safety inspections, insurance premium increases, fines and litigation. What is the total cost to the organization beyond the maintenance department?
  • What are your safety assumptions?
    • What value do you place on a near miss, OSHA reportable or lost time injury? A commonly used figure is $35,000.
  • Will you include manpower that you would pay for regardless?
    • Be careful! You should only take credit for maintenance labor savings if your organization, as standard procedure, reduces headcount as reliability improves and work is eliminated. For example, if the solutions from your RCA on a chronic compressor failure completely eliminate future failures, unless you reduce the paid hours of your full-time or contract maintenance personnel, you are not saving maintenance money and should not take credit for this in your payback calculations.

Example ROI calculation
The following problem description and calculations illustrate the return on investment from a successful RCA. A product dryer was experiencing 30 failures per year. Lost profit from lost product sales due to dryer downtime was approximately $750,000 per year. Out-of-pocket repair costs were running approximately $150,000 per year. The RCA resulted in a solution with a capital cost of $180,000 and an annual operating cost of $10,000. The RCA costs (team meeting and lab testing time) totaled $25,000. The failure rate after solution implementation went to less than one per year. (Note: a conservative assumption of one failure per year will be used.) Assume five-year life for capital, RCA and training costs. (Note: a conservative assumption will be made to charge all training costs against this RCA. In reality, this cost would be spread over many other RCAs.)

So, if we annualize one-time costs over a five-year period we come up with the results in the worksheet shown in Table I.
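
Because the Table I worksheet is not reproduced here, the sketch below simply runs the dryer numbers through the ROI formula given earlier. Grouping the solution’s $10,000 annual operating cost with the annualized cost is an assumption, so the printed figure is indicative rather than an exact match to the published worksheet.

    def rca_roi(annual_cost_before, annual_cost_after, annualized_cost):
        """ROI (%) = Net Savings / Cost x 100, per the formula above."""
        net_savings = annual_cost_before - annual_cost_after - annualized_cost
        return net_savings / annualized_cost * 100.0

    # Dryer example: 30 failures/yr reduced to 1, one-time costs spread over 5 years
    annual_cost_before = 750_000 + 150_000              # lost profit + repair costs
    annual_cost_after = annual_cost_before * 1 / 30     # one failure instead of 30
    one_time = 180_000 + 25_000 + 1_500                 # capital + RCA + training
    annualized_cost = one_time / 5 + 10_000             # plus annual operating cost

    print(f"{rca_roi(annual_cost_before, annual_cost_after, annualized_cost):.0f}%")  # ~1600%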

Beyond the numbers
The ROI reflected in the sample worksheet in Table I actually is low compared to the average return on investment for RCA, which ranges between 2000% and 3000%. Thus, if your maintenance organization is seeking to stretch a shrinking budget, Root Cause Analysis can be one of the best tools you have. RCA will not only reduce costs, it will improve net profitability when applied to capacity-limiting problems. If you are not performing RCA—or if you’re under-utilizing it—and you are feeling the pressure to cut costs and show value, RCA should be high on your list of priorities. MT


Chris Eckert is president of Apollo Associated Services, an innovator in root cause analysis training, consulting and investigations. Formerly a reliability engineer with Dow Chemical and Rohm and Haas, Eckert is a registered Professional Engineer and a Certified Maintenance and Reliability Professional. Telephone: (281) 218-6400; e-mail: ceckert@apollorca.com



Protecting Critical Machinery: The Value Of A Complete Solution

Online vibration monitoring integrated with process control and combined with shutdown protection, predictive maintenance and performance monitoring is a sure-fire way to keep your rotating equipment up and generating revenue.

When a steam turbine in a Midwest power plant went down without warning, half of the plant’s production was instantly lost for months (along with substantial revenue from the power that should have been generated).

Could the outage have been prevented? Apparently so. Plant management immediately went shopping for a new online system that would not only monitor the turbine’s operation continuously, but also gather diagnostic data capable of revealing unrecognized internal problems in time for corrective action to prevent a similar failure in the future.

In this day of advanced technology, it is both possible and essential to access decision-making information about the operating condition of critical equipment—not just a “trip” signal that comes only after significant internal damage has actually occurred. Some companies are putting productivity at risk by continuing to rely only on “protection” systems for their critically important turbomachinery. Protection is vital, but it is only part of the complete solution for turbomachinery.

A complete strategy for protecting critical machinery covers three real-world scenarios using four monitoring components. These real-world scenarios are:

  • Unpredictable events
  • Predictable events
  • Controllable events

Unpredictable scenarios are events that happen suddenly and without warning. For example, a metallurgical imperfection or slug of water from the boiler may cause a blade to snap suddenly. If such an event occurs, a decision to trip must come instantly and be integrated with process control to orchestrate the machine, area, or plant shutdown. In addition, machine health information gathered before and during the trip will aid the assessment of what happened.

Predictable events are machine malfunctions that are detected and tracked months in advance of a planned outage. Maintenance planners use this information to identify the area and type of the fault, gauge its severity, order parts and plan the outage. When machine malfunctions in this category are monitored, the business can weigh continuing to run the machine (and possibly damaging it further) against the optimal time for scheduling the outage, manpower and parts. In parallel, the protection system is monitoring for a sudden turn for the worse, to protect against catastrophic failure.

Controllable events represent a class of scenarios that provide the largest return on investment for monitoring capital outlay. In addition, controllable scenarios provide the best opportunity to optimize processes and performance. For example, on an unusually cold day, the operator ramps up the turbine and receives an oil whirl vibration alert from the predictive vibration monitoring system and simultaneously sees a low temperature alarm from the process control system on that same bearing. This is a controllable scenario, and the operator knows exactly what to do. Reducing the RPM of the turbine will immediately stop oil whirl from damaging the bearing. Solving the low oil temperature problem will keep the turbine out of the damaging oil whirl condition when the turbine is brought back online. In controllable scenarios, an operator simultaneously has both machine health and process status/health and is able to avoid problems that would otherwise lead to degraded machine health.

The four monitoring components required for a complete solution are:

  • Protection monitoring
  • Prediction monitoring
  • Performance monitoring
  • Integration of the above to process control

Predictive maintenance of rotating assets is best practiced using information gathered through vibration monitoring. Sometimes, this data signals big trouble down the road, allowing analysts to make a judgment as to when a failure might be expected. Based on their prediction, repairs may be made immediately, in time to avoid the failure. It may be possible to delay repairs until a scheduled plant shutdown—or let them go altogether. Ultimately, this technology helps the plant and maintenance managers make business decisions about what to do—and when and how to do it. The result is generally a far less expensive proposition than reacting after something breaks.

Yet, according to a Deloitte & Touche study, more than 50% of industry maintenance man-hours are spent fixing equipment after a failure has occurred, whereas less than 18% of those hours are spent determining when equipment might fail and acting accordingly. Those numbers will improve as more maintenance departments implement solid predictive maintenance programs based on online vibration monitoring of key machines.

The “most critical” category usually involves only about 5% of rotating assets, but this small number of machines represents an easy target for a complete online monitoring solution. In layman’s terms, it’s like picking the proverbial “low hanging fruit”—with enormous financial returns on a single “find” with a controllable outcome.


Online monitoring
Continuous online monitoring of rotating machinery represents technology well beyond systems that provide only periodic snapshots of an operation. Yet, some critical situations can be averted only if a stream of data regarding the current condition of the equipment is available. Fortunately, it is now possible to continuously obtain information about the health of a whole range of gas or steam turbines, generators, compressors, fans, motors, pumps and the like (see Sidebar). Equipment essential to the success of the operation can be watched automatically for changing vibration patterns and rising temperatures—sure signs of impending trouble.

Some of the earliest automated monitoring systems were dedicated to expensive steam-driven power generating turbines. Data received directly from a machine are stored on a hard drive, buffered and presented in a variety of plots that depict exactly what is occurring within that machine. Maintenance engineers and machine specialists suddenly had never-before-available information to use in analyzing changes in the machine’s operation.

When properly interpreted, these signals will pinpoint the location, nature and the severity of developing problems. Data from automated monitoring systems enable plant personnel to predict with greater accuracy when a machine will need maintenance to prevent damage and avoid lost production. Machinery health management recognizes the significance of each machine in a production environment, focusing greater attention on those machines that, if stopped, would likely shut down all or a major section of the plant. Online monitoring assures that the condition of these machines is being assessed continuously.
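
At its simplest, each block of data a monitoring system collects can be reduced to an overall vibration level and compared against alert and trip limits, which mirrors the prediction and protection roles described earlier. The limits and the signal in the sketch below are invented for illustration; real systems apply far richer analysis and machine-specific setpoints.

    import numpy as np

    ALERT_LIMIT = 4.5   # mm/s RMS, illustrative "investigate" threshold
    TRIP_LIMIT = 7.1    # mm/s RMS, illustrative "protect the machine" threshold

    def overall_rms(velocity_samples):
        """Overall RMS of a block of vibration-velocity samples."""
        v = np.asarray(velocity_samples, dtype=float)
        return float(np.sqrt(np.mean(v ** 2)))

    def classify(rms_value):
        if rms_value >= TRIP_LIMIT:
            return "trip"      # protection: shut the machine down
        if rms_value >= ALERT_LIMIT:
            return "alert"     # prediction: investigate and plan maintenance
        return "normal"

    # Invented sample block: a 25 Hz vibration component riding on broadband noise
    rng = np.random.default_rng(0)
    t = np.arange(0, 1.0, 1.0 / 2048)
    block = 7.0 * np.sin(2 * np.pi * 25.0 * t) + 0.5 * rng.standard_normal(t.size)
    level = overall_rms(block)
    print(f"{level:.2f} mm/s RMS -> {classify(level)}")   # ~4.97 -> alert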

Performance monitoring
Another technology that can be applied to protect critical machinery compares machine performance with a thermodynamic efficiency performance model. Compressors, boilers and steam or gas turbines are the most commonly modeled types of equipment, but a thermodynamic model can be computed on literally any machine in a plant. Equipment performance deteriorates primarily due to fouling or build-up on blades and other surfaces, reducing efficiency. The consequence is higher energy usage and potential lost throughput.

Equipment performance monitoring systems use existing process measurements, pass them through the thermodynamic model and provide a true picture of how well that machine is actually performing. In other terms, actual efficiency loss versus design for the given operating conditions is determined. While plant personnel may be aware that the performance of a piece of equipment is below normal, they may not know the significant cost of lost heat rate and excess energy usage. This information also can help lead to the root cause of degradation.

The most important element of performance monitoring is the expertise required to build the thermodynamic model and then distill and validate the large amount of input data. By utilizing the performance model to analyze this information and formulate actionable recommendations, performance specialists are able to identify lagging performance that has not been recognized by either production or maintenance personnel.

Because the model input data comes from the existing process measurements commonly found already in the site’s historian, the data can be analyzed by either onsite systems or remotely with off-site specialists. Analysis based on thermodynamic modeling also enables a specialist to predict with reasonable accuracy when a piece of equipment needs to be taken out of service for either recovery of lost efficiency or a comprehensive overhaul. A machine’s future performance is evaluated based on its history in order to predict when the efficiency of that unit will drop below a certain financial or performance threshold, signaling when it should be taken out of service. In this way, performance monitoring complements predictive maintenance.
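
To give a flavor of what a performance model compares, the sketch below checks a compressor stage’s isentropic (ideal-gas) efficiency against a design value and flags an excessive deviation. The readings and thresholds are made up, and commercial performance-monitoring models are considerably more detailed.

    def isentropic_efficiency(t_in_k, t_out_k, p_in, p_out, gamma=1.4):
        """Ideal-gas isentropic efficiency of a compression stage:
        ideal temperature rise divided by actual temperature rise."""
        t_out_ideal = t_in_k * (p_out / p_in) ** ((gamma - 1.0) / gamma)
        return (t_out_ideal - t_in_k) / (t_out_k - t_in_k)

    # Hypothetical air-compressor readings pulled from the plant historian
    design_eff = 0.82
    actual_eff = isentropic_efficiency(t_in_k=300.0, t_out_k=500.0, p_in=1.0, p_out=4.0)
    deviation = design_eff - actual_eff

    print(f"actual {actual_eff:.1%}, {deviation:.1%} below design")
    if deviation > 0.05:    # flag a loss of more than five efficiency points
        print("Efficiency loss exceeds threshold -- check for fouling.")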

Pulling it all together
Let’s look at how a complete solution like the one described in this article would work in a typical turbomachinery application. In Fig. 1, the sensors mounted to bearings on a critically important machine provide a continuous flow of vibration measurements. A large turbo generator may have more than 10 bearings with two sensors at each bearing, plus other unique instrumentation—like speed sensors, differential expansion sensors and case expansion sensors. There could be as many as nine different types of measurements at various locations down a machine train.

The cables leading from these sensors are connected to new online monitoring hardware that is the foundation for the complete online solution. By measuring for detailed vibration, in addition to peak vibration, the new turbomachinery protection system, which is intended as a retrofit on shutdown systems, has the ability to recognize developing machinery conditions as well as detect a severe condition requiring a shutdown to protect the machine.

Ideally, the signs of potential failure have been observed, predicted and attended to so that vibration never gets to the level where “protection,” i.e. shutdown, is necessary. In the rare unpredictable scenario of a rapid catastrophic failure, the machine is protected.

Machinery health parameters are integrated with the plant’s control system. For the first time, vibration monitoring becomes an extension of the central control system, which often monitors temperature, pressure, load, etc., any one of which could be symptomatic of a problem. Vibration monitoring actually monitors the position and the motion of the shaft inside the bearings. That information is now integrated with the control room, making operators aware of what is happening deep inside a critical machine—such information is of much greater value than just the symptoms of degrading performance.

Up to 50% of machinery problems are process induced. If they are not caused by operators directly, they are the result of standard procedures used by control room personnel. When adjustments are made under these conditions without machine health feedback, tradeoffs occur. Improvements are made to production, but operations personnel are blind to the stress placed on machinery health.

When the operators have real-time supervisory and vibration parameters at their disposal, they can observe the impact of process adjustments on a machine’s health and learn what steps can be taken to actually improve performance. For example, during the start-up of a turbine, if case expansion or rotor eccentricity levels are not within acceptable limits, operators can make real-time adjustments to ramp rates and also make business decisions to optimize the ramp rate versus the impact on machinery health. Informed real-time decisions are best made when vibration data is integrated with the process automation system.

Conclusions
For the most critical rotating equipment in the plant environment, three scenarios must be accounted for: the unpredictable, the predictable and the controllable. The complete solution covers all three scenarios by providing protection monitoring, prediction monitoring and performance monitoring all integrated with the process control system.

Monitoring systems utilizing advanced predictive technologies are giving end users newer, faster and more complete methods for manual and automated analysis—information that can be acted upon. MT


Deane Horn is a systems engineer for Emerson’s Machinery Health Management group. He’s been with the corporation since 1997, when he joined Emerson Process Management’s Asset Optimization Division as an online systems consultant supporting domestic and international customers. Prior to that, Horn spent eight years with Westinghouse Electric Corp. working on systems test and integration. He received his BSEE from the University of Tennessee, Knoxville.

