Archive | September, 2005

September 1, 2005

Lessons From the Busted Knuckle Garage

Overheard in a restaurant recently: “Did you hear about the Auto Service Center in town? It closed last week. The guys who owned it finally decided to retire. Remember how many years we looked for good mechanics? They were real good, weren’t they? Never disappointed us. Always found a problem before something fell apart. Prices were fair, service was fast—we never had to wait on parts. They really knew what they were doing, didn’t they? Why, they were ASE-certified and factory-trained to work on our cars. They had the latest factory technical bulletins and recall notices, and they notified us whenever it was time for our regular service. They treated us so well all these years; we’re really going to miss them. Where do we go, now?”

The concern voiced in this conversation makes sense, doesn’t it? When you finally discover a good mechanic or service center to take care of your personal car or truck, you really can tell the difference in how well it runs. It gets better mileage. The tires last longer. Major problems rarely happen. The mechanics seem to take personal pride in their work—and it shows!

What if you had to take your vehicle to the “Busted Knuckle Garage,” where the mechanics aren’t trained to work on your make and model? Where they don’t have the right technical information or maintenance and repair manuals? Where the goal is to fix things fast and cheap, and “accuracy” and “workmanship” are not priorities? Where it might require three to four trips to the shop before a problem is actually fixed? Where the wait for parts could be several hours—or several days? And what about those greasy fingerprints left on the steering wheel, door and hood? We would tend to avoid this type of service, wouldn’t we?

Sadly, in many plants and facilities today, we’re doing “Busted Knuckle Garage” service on our most critical equipment, yet we’re still expecting it to run right!

Why is it that companies can spend millions (sometimes hundreds of millions) of dollars on equipment and facilities and still come up short when it comes to maintenance and training? Why do the decision makers assume that “Charlie” and “Ruth”—with their years of experience in the plant—can just “figure things out” on the newest high-tech equipment? But, don’t stop there. Let’s throw another monkey wrench into this example.

What if the operators aren’t trained on the equipment-specific requirements, settings, specs and set-up procedures? (Untrained operators are causing more and more equipment problems in plants today.) Then, send in some untrained maintenance folks to fix the problem. You guessed it; they don’t have the right tools or parts, either.

At this point, the equipment has to run longer and harder to make up for all the unplanned downtime. Monthly preventive maintenance tasks are deferred, for weeks—or even months. Consequently, the equipment breaks down even more.

It is a vicious cycle. Airplane pilots know this as a “death spiral” that you can’t pull out of. How, then, can we possibly expect our equipment to run right and our costs to be low?

Here’s how you can avoid the “Busted Knuckle Garage” syndrome:

  • Focus on your plant’s most critical, most problematic, constraint equipment.
  • Gather the equipment documentation and bulletins.
  • Identify the skills and knowledge required to make it run right—both operations and maintenance.
  • Train the right people in the right skills.
  • Identify the proper operations and maintenance procedures.
  • Train the right people to properly perform the tasks.
  • Identify the proper PM tasks and intervals.
  • Train the right people to perform the right PMs.
  • Plan and schedule the PMs.
  • Fix things that are broken, jumpered, bypassed, leaking or worn out.
  • Stock the right spares and keep them in serviceable condition.
  • Document all work on the equipment in the work-order system.
  • Use equipment repair and maintenance history to identify and address weaknesses.
  • Then, when you really get down to it, look at TOTAL DOWNTIME losses. You may see what I’ve seen for years: maintenance can’t do it alone. More than 90% of major equipment losses are outside the direct control of maintenance.

Finally, be sure to share the “Busted Knuckle Garage” story with top management. Where would they prefer to take their cars for service?—Robert Williamson, Strategic Work Systems, Inc., e-mail: RobertMW2@cs.com; Internet: www.swspitcrew.com

Robert Williamson is an internationally recognized consultant, author and educator for modern manufacturing. His 30 years of experience with the “people-side” of production operations and maintenance improvement include work with more than 300 companies and sites.


September 1, 2005

Ultrasound + Infrared = New Heights for PdM

The variety of uses for these two dynamic technologies and the tools that facilitate them is astonishing. Using them in tandem is a great way to increase the success of a PdM program.

These days more and more companies are outsourcing their predictive maintenance (PdM) programs. Perhaps they don’t have the manpower and time to devote to a comprehensive PdM program. Or, as the pace of change in available technologies rapidly increases, just keeping abreast of the latest developments could be a full-time job all by itself.

PdM programs reduce repair costs and downtime, while optimizing safety and reliability. Two predictive maintenance tools essential to such programs are the airborne ultrasound probe and the infrared thermal imager. Used individually, either of these tools can provide good results. Based on my own experience, using them in combination is better.

The tools of our trade
My job is to locate problems and provide qualitative data so that a company can make data-driven decisions regarding maintenance timing. Working at a taconite mine, we would use airborne ultrasound probes and thermal imagers for anything from motor and conveyor bearings, to air leaks, to all components of our electrical power systems. These instruments also can be used with steam traps, roof leaks, moisture problems and energy audits.

How much does an unplanned shutdown cost? At the taconite mine, lost production costs ranged from $1000 to $5000 per hour.

What does it cost if a motor bearing fails and not only shuts down production, but damages the windings and/or the shaft of the motor as well? Using airborne ultrasound and/or thermography to locate anomalies and impending failures before they occur allows a company to schedule maintenance efficiently. This helps maximize production, lengthen equipment life, and improve safety.

Airborne ultrasound. These instruments generally sense sounds in the frequency range of 20 kHz to 100 kHz, which is beyond the human hearing range. The high frequencies generated by a variety of air and gas leaks, worn bearings, and faulty electrical equipment are electronically translated down to human hearing range by a process called heterodyning. At this point, they can be heard through headphones and viewed as intensity levels on display panels or meters. Wearing headphones and either touching the instrument to a test spot or pointing it at a target, technicians can hear ultrasonic sounds and determine their sources. The shortwave characteristic of ultrasound provides three major advantages.

  • The source of the ultrasonic sound can be identified with little interference from competing sounds.
  • The applications for ultrasound are numerous; they cover most potential mechanical, electrical and leak problems.
  • Potential failure conditions can be detected, trended and analyzed earlier than is possible with traditional PdM technologies.
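
A minimal sketch of the heterodyning step described above, assuming a 38 kHz emission and a 40 kHz local oscillator (both illustrative values, not any particular instrument's design): mixing the ultrasonic signal with the oscillator produces a low "difference" tone that survives a simple low-pass filter and can be sent to headphones.

```python
import numpy as np

fs = 192_000                     # sample rate high enough to capture ultrasound, Hz
t = np.arange(0, 0.1, 1 / fs)    # 100 ms of signal

ultrasonic = np.sin(2 * np.pi * 38_000 * t)   # e.g., a leak or bearing emission at 38 kHz
local_osc = np.sin(2 * np.pi * 40_000 * t)    # instrument's local oscillator

# Mixing (multiplying) yields sum (78 kHz) and difference (2 kHz) components.
mixed = ultrasonic * local_osc

# A crude moving-average low-pass filter suppresses the 78 kHz component,
# leaving an audible 2 kHz tone that a technician could hear in headphones.
window = 9
audible = np.convolve(mixed, np.ones(window) / window, mode="same")
```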

Infrared thermography.
Infrared thermal imaging measures infrared radiation that all objects warmer than absolute zero give off in proportion to their temperatures. This type of radiation is invisible to the human eye. Thermography provides images depicting the relative temperature differences of the objects under scrutiny. One can view these “pictures” for anomalies, either hot or cold, to locate problems and to determine appropriate corrective actions.
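
As a back-of-the-envelope illustration of that proportionality (my own sketch, not the imager's internal processing), the Stefan-Boltzmann law shows how strongly radiated power grows with absolute temperature; the emissivity value below is an assumption.

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.95     # assumed emissivity of a painted or oxidized surface

def radiated_power_w_per_m2(temp_c: float) -> float:
    """Total radiant exitance of a surface at the given temperature in Celsius."""
    temp_k = temp_c + 273.15
    return EMISSIVITY * SIGMA * temp_k ** 4

# A connection running 40 C hotter than its 30 C surroundings radiates noticeably
# more power, which is the contrast a thermal imager renders as a hotspot.
print(round(radiated_power_w_per_m2(30.0)))   # ~455 W/m^2
print(round(radiated_power_w_per_m2(70.0)))   # ~747 W/m^2
```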

To prevent costly mistakes, it is important to be trained in the safe and proper operation of these instruments. For proper inspection, both technologies normally require the equipment under inspection to be running and under load. The advantage to this is that there is no need to shut down operations or disrupt valuable production schedules to use either type of instrument.

When frequently and properly used, airborne ultrasound and infrared individually offer a quick return on investment (ROI). Combined use accelerates the ROI. The following examples illustrate the benefits of using these technologies in tandem.

Electrical disturbances
Electrical utilities cannot afford power outages. Yet, one such utility in Minnesota had not conducted an infrared inspection in nearly seven years and had never performed an airborne ultrasound inspection.

While conducting an infrared survey for this utility, I noticed a hotspot on an overhead disconnect, indicating a failing connection. Using airborne ultrasound on the same connection, there did not appear to be a problem. When the utility checked the disconnect, everything appeared to be normal until a lineman re-energized it. At that point, the connection began arcing. Locating this problem allowed the utility to repair it promptly, thereby preventing a future unscheduled power outage. In this case, the outage would have affected customers in three towns, over several miles of power lines.

In another situation with this utility, on an extremely humid day, I discovered arcing on an insulator between a power line and a power pole. The ultrasonic emissions from this insulator indicated a crack. Infrared, however, did not detect the problem. Without intervention, the cracked insulator would have eventually caused a power outage.

Although many electrical problems will be revealed by both technologies used individually, there are applications where one will detect a problem and the other will not. Use of both instruments optimizes detection of problems.

Electrical problems usually show up as either temperature changes, which infrared detects, or as electrical emissions, producing ultrasonic frequencies, which ultrasound instruments can detect. Using the two technologies together provides a more complete electrical survey than either can provide by itself. Each technology has its own advantages. Ultrasound, for instance, can detect corona (while infrared cannot) because corona does not generate heat. Ultrasound detects arcing and tracking at earlier stages than infrared, which detects the build-up of heat resulting from arcing and tracking. On the other hand, since infrared detects heat, it will identify problems that do not generate sound, such as those caused by high resistance. An example of this would be a corroded or improperly crimped splice in a power cable.

Playing it safe
Safety while performing infrared inspections on loaded electrical equipment, including switchgear and busways, can be increased greatly by using airborne ultrasound prior to the infrared inspection.

Such an inspection cannot be performed through closed panels or covers. But, by scanning around panel covers, seams, and vent holes of electrical equipment with the airborne ultrasound probe, it is possible to detect arcing, tracking, and corona problems, before opening up live electrical cabinets. This reduces the chances of an arc flash explosion occurring when opening the cabinets. Explosions like this can be deadly, generating metal vapors and temperatures up to 35,000 F. Having advance warning of such problems allows for the equipment to be de-energized. The infrared inspection can be safely performed immediately after de-energization and the necessary repairs can be made.

Detecting freon leaks
Ultrasound and infrared technologies work well together to locate underground leaks. One such application where I used them in combination was in the pinpointing of freon leaks at an indoor ice arena. Freon was the refrigerant used in the rink’s cooling system.

Rink employees could tell by a drop in pressure on the cooling system’s gas gauges that it was losing freon, but they could not find the leak. Freon is not only expensive, it is damaging to the environment when leaked into the atmosphere. If left unrepaired, a freon leak could also destroy the rink floor, causing it to heave. Multiple leaks with severe heaving could require the replacement of the entire floor, at a possible cost of $300,000.

On start-up of the ice plant, I scanned the rink with infrared for abnormally cold areas that might indicate a possible refrigerant leak. (There was only a small window of time for performing this scan, since frost would form on the surface of the rink fairly quickly and mask the cooler areas. This would make the entire surface appear to be the same temperature.)

Normally, in this type of application, there should be a pattern indicating where the refrigerant lines lie. Leaks should appear as large round or oblong spots. During the scan, I marked the abnormally cold areas.

Next, I employed ultrasound, using the ultrasonic instrument with its contact module, and listening for the loudest ultrasonic signal. This pinpointed the source of the leak within inches. Such accuracy was necessary because of the risk of damaging nearby lines during repairs.

While the infrared imager drastically cut down the ultrasound probing time, it alone would not necessarily have indicated the precise source of the leak, since refrigerant flows away from its source.

Picking out bearing problems
Combining ultrasound and infrared also proved to be an invaluable technique when it came to inspecting bearings at a large distribution warehouse. On this project, I surveyed conveyors incorporating many motors and thousands of roller bearings, most of them in hard-to-reach areas.

An initial infrared scan identified bearings with abnormally high temperatures. Then, utilizing the ultrasonic instrument with its contact module, I listened to determine if the overheated bearings were defective or experiencing a lubrication problem.

If the problem appeared to be under-lubrication, maintenance personnel, aided by the ultrasonic instrument, properly greased the bearing. After a few minutes, I would listen to the same bearing. A recurring signal was an indication that the bearing needed to be replaced.

While ultrasonic signals do appear in bearings before there is a noticeable temperature rise, on this warehouse project, it was not feasible to inspect each bearing using ultrasound. In this case, failure of certain conveyor motors or roller bearings could have shut down the operation at a cost of $2000-$10,000 per hour, for as long as two to three days, while a motor was being rebuilt or while waiting for new components to arrive.

The bottom line
Using a combination of ultrasound and infrared is a way to move PdM to a higher level. In my work, I have found that organizations are more receptive to such activities when they discover the savings that can be achieved.

Whether a company has an overworked maintenance department or limited access to predictive technologies, or time is simply of the essence, turning to an expert for predictive maintenance needs will help increase equipment reliability and availability, while reducing costs due to unplanned outages. Leveraging ultrasound and infrared technologies together is a value-added technique for ensuring the desired deliverables.

Rick Judnick is president of Thermography & Ultrasound Diagnostics, Inc. An electrical engineer with experience in electrical maintenance and reliability, he is Level II ASNT certified in both thermography and airborne ultrasound. Telephone (218) 827-2297; e-mail rjudnick@t-u-diagnostics.com.


September 1, 2005

Continuous Improvement

Tom Madding, Publisher

Someone once said that “the biggest room in the world is the room for improvement.” We at Maintenance Technology agree—just as we bet most of our readers would. To continuously improve our operations, we all must make changes in our organizations and processes, on a regular basis. In this regard, a publishing house is no different than a manufacturing plant or facility operation. No magazine, no matter how successful it is, can afford to rest on its laurels. In our business, responding to the rapidly changing needs and concerns of readers is paramount. The ability to “turn on a dime” (and how quickly you make that turn) is what keeps a publication up and running—and it’s what separates a publication from the rest of the pack.

As part of our continuous improvement program and our mission to better serve our readers, Maintenance Technology has recently made some significant changes.

Effective with this issue, two new individuals are at our editorial helm. Terry Wireman has assumed the duties of Editorial Director for both Maintenance Technology and Lubrication & Fluid Power magazines. Jane Alexander has joined Terry as Managing Editor for both publications. Let there be no misunderstanding. While these moves represent a change in personnel, there is no change in our commitment to keeping Maintenance Technology the premier magazine for plant equipment reliability, maintenance, and asset management.

Going forward, we will continue to focus on “Best Practices,” only now we’ll be including more details on how to achieve these goals in your operations. We also will begin covering more organizations that have achieved a “Best Practice” status. That way, readers can learn from successful peers—those individuals who have been able to initiate and sustain changes in their organizations. New and revitalized departments and columns, plus a whole new look and feel to our magazine also will be forthcoming in the next few months.

In the meantime, both Terry and Jane will be attending various industry events, including technical conferences, User Group meetings, and trade exhibitions around the country. Please take the time to visit with them whenever you run into them. They—we—want to hear from all of you, end users and advertisers alike. Furthermore, we are eager to share your stories and messages with others.

These are exciting times for Maintenance Technology. Yet, as we move full-speed ahead in our continuous improvement, we would be remiss in not mentioning Bob Baldwin, founding Editor of this publication, and the many contributions he has made to our industry.

We regret that Bob will not be associated with our editorial team in the future. Over the years, he has worked tirelessly in the development and support of many maintenance-related organizations, including: SMRP, MIMOSA and the Maintenance Excellence Roundtable, to name but a few. Best maintenance practices truly are Bob’s passion, and we’ll miss him greatly. All of us at Maintenance Technology wish Bob the very best in his future endeavors. MT


September 1, 2005

Managing Production, Facility, and Fleet Assets

Big operations, spread across a big state, demanded a big CMMS solution.

The Lower Colorado River Authority (LCRA) is the largest publicly owned supplier of renewable energy in the State of Texas. It delivers electricity for Central Texas, develops water and wastewater utilities, and manages the water supply and environment in the lower Colorado basin. It also provides public recreation areas and supports community and economic development activities in 58 Texas counties.

To be specific, LCRA sells wholesale electricity to 40+ retail utilities (including cities and electric cooperatives) that serve more than one million people. It operates over 3300 miles of transmission lines, manages water supplies along a 600-mile stretch of the Texas Colorado River, and runs six hydroelectric dams. In addition, the organization owns approximately 16,000 acres of recreational lands along the Highland Lakes and Colorado River.

Generating annual revenues in excess of $600 million, LCRA counts on its 2100 employees to help provide reliable low-cost utilities and high-quality public services. This is a big operation with big maintenance management needs.

Organizationally, LCRA divides itself into five business units—wholesale power; transmission and distribution; water and wastewater; community services; and business services groups. Over time, it recognized the need for a comprehensive software tool that could manage the diverse assets of each business unit, including all production, facility, and fleet assets. Important requirements included helping LCRA comply with government regulations and the ability to integrate with the organization’s key financial software, PeopleSoft. Finally, the work force needed the solution to be a seamless, mobile one.

After considering these requirements, LCRA determined that upgrading its existing system would be more costly than implementing a new and better solution. Following extensive review, the organization selected Maximo, MRO Software’s strategic asset-management solution, to implement throughout all five of its business units.

Seamless Integration

It was important for LCRA’s chosen asset-management system to integrate with other critical business systems within the organization. It selected the Maximo Integration Gateway to integrate Maximo with its PeopleSoft purchasing system.

The purchase requisition starts in Maximo, where purchase order information resides and inventory information is updated. The order is then sent to the PeopleSoft system for approval. As a result, LCRA now has better visibility than before into what is being spent.

LCRA also integrated Maximo with its scheduling solution, Primavera Enterprise (P3e). The implementation team integrated the solutions to reduce errors from manual entry, eliminate duplicate entry, and also decrease the time spent on creating schedules. Such integration allows LCRA to combine the work management capabilities of Maximo with the scheduling capabilities in P3e.

Representative benefits
LCRA’s wholesale power group consists of two natural gas plants, one coal-fired plant, and 300,000 assets. Because the group lacked a complete view of its inventory, unnecessary parts and equipment would be ordered. With Maximo, this business unit now can take advantage of inventory-sharing, thereby reducing on-hand quantities. Since it can more easily determine obsolete items, the group has better control of its inventory. Today, more aware of its reorder points, it relies on its CMMS solution to automate the procurement process.

Like other LCRA business units, the wholesale power group uses filters inherent in the software solution’s functionality to view the quantities of items in its storerooms. Prior to Maximo’s implementation, the process was very time-consuming, with employees having to count items in bins and manually enter purchase requisitions into the system. Now, they generate automatic requisitions based on set reorder points. This functionality has allowed the group to reallocate five employees to other mission-critical activities, resulting in a conservatively estimated overall annual salary savings of $250,000.
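
A generic sketch of the reorder-point logic described above; the field names, thresholds, and quantities are illustrative assumptions and do not reflect Maximo's actual data model or API.

```python
from dataclasses import dataclass

@dataclass
class StoreroomItem:
    item_id: str
    on_hand: int
    reorder_point: int
    order_quantity: int

def generate_requisitions(items: list[StoreroomItem]) -> list[dict]:
    """Create a requisition line for every item at or below its reorder point."""
    return [
        {"item": item.item_id, "quantity": item.order_quantity}
        for item in items
        if item.on_hand <= item.reorder_point
    ]

reqs = generate_requisitions([
    StoreroomItem("BRG-6205", on_hand=2, reorder_point=4, order_quantity=10),
    StoreroomItem("FLT-HYD-10", on_hand=25, reorder_point=12, order_quantity=24),
])
# -> [{'item': 'BRG-6205', 'quantity': 10}]
```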

The water and wastewater group has set up all of its job plans in Maximo, and now tracks costs by operation and job plan. This group also has created specific work order types. For example, the subject of environment has been added as a work order type.

LCRA can further define and prioritize the type of work to be completed. Emergency work orders are flagged, and equipment is associated by work-order priority, making the entire process much more manageable.

Compliance is another vitally important issue for LCRA. It must comply with rules and regulations set by the Federal Energy Regulatory Commission (FERC). Scrubbers, or emission control devices, have to be placed on units in accordance with the year in which the units were built. Nothing can fall through the cracks—failure to comply would result in hefty fines and negative publicity. LCRA now depends on Maximo to track permits and generate reports documenting that the necessary project permits have been obtained.

Fleet assets present a significant management challenge for any organization and LCRA is no different. While the organization did not want to purchase a new transportation asset-management system, it knew that a change was sorely needed. With a fleet that encompasses more than 2000 vehicles, including bucket and pickup trucks, sedans, and dozers, managers had clear goals in mind. They wanted to move the operation from a mainframe system, be able to track warranty information, allow chargebacks to various departments—and reduce costs.

Today, all of LCRA’s mobile equipment is being tracked in Maximo, making the fleet operation easier to manage. With this software suite, the organization has set up vehicles as tools and is charging each business unit for its vehicle use. Mileage rates and depreciation on vehicles also are accounted for. If, for instance, a maintenance bulldozer is used at a coal-fired plant or a car is used to go to a particular site, the number of miles is determined, and the unit is charged. Rates vary depending on the type of vehicle. Moreover, these fleet assets are now tracked and managed throughout their entire life cycle.

Mobile features enhance maintenance and accountability
LCRA knew whatever software solution it chose also needed to be a mobile one. It selected the Maximo Mobile Suite to reduce the amount of reactive work and to eliminate preventive maintenance backlogs. This version allows employees to use mobile computers to access asset, work, and parts information at the point of performance.

Crews in the transmission and distribution group used to go out for a week at a time with a stack of printed work orders. Now they use 40 mobile devices and simply download information. If crew members see equipment that needs to be fixed, they can immediately create a work order and attend to the problem while in the field. They also can view the history on the asset so they know what has been done in the past. According to LCRA, duplicate data entry has been reduced and data integrity at the point-of-performance has been enhanced.

The organization also is making use of Maximo Mobile Auditor. This feature maintains accountability for critical assets and collects and enhances vital asset information. Employees inventory equipment and verify that data is entered into the system on an ongoing basis. This checks-and-balances approach keeps LCRA on top of pertinent information regarding equipment location and condition—and gives it far better control over its assets.

Strategic impact
LCRA takes a holistic view of its business, including its costs. Its move to Maximo has resulted in a strategic impact on the business. In short, the solution has enabled LCRA to reduce downtime and streamline operations. Being able to make better and more informed business decisions regarding critical assets is leading to savings in both time and money. Within LCRA, it is expected that this software solution will be driving process improvements and providing benefits to the organization for a long, long time.

Clayton Cook has been employed with LCRA for 25 years and CMMS Manager for over 10 of those years. He serves as the system owner and functional manager of the Maximo Asset Management system at LCRA. He also leads the LCRA Maximo Subject Matter Expert (SME) Team and acts as a liaison between the end users, the SME Team and the LCRA Technology (Applications Support) Team. E-mail: clay.cook@lcra.org. Industry veteran Ron Wallace is Director of Utility Industry Marketing and a longtime employee of MRO Software. E-mail: ron.wallace@mro.com.


September 1, 2005

Do You Have An Effective Lube Oil Analysis Program?

In-depth oil analysis is essential to the health and reliability of production machinery.

Oil analysis may be one of the last frontiers of industrial maintenance, where large amounts of money can be saved for a relatively small investment. By reducing failure-related costs, it is not beyond the realm of possibility to expect a return on investment in excess of 500 percent. Savings of more than $1 million are possible, depending on the size of the plant.

Not all lube oil analysis programs are equal. Just because a plant engages in some form of oil analysis does not mean that the machinery is well protected or that the program is effective.

It is important to note that oil analysis is only one part of a comprehensive tribology program that includes vibration monitoring and analysis, ultrasonics, and thermography. Oil analysis supplements vibration analysis by revealing two key root causes of machinery failure: changes in oil chemistry and oil contamination.

Evaluate the current situation
Here are three questions to ask:

  • How many samples are being collected and tested?
  • What tests are being done regularly?
  • What cost savings have been documented in the past 12 months?

Sampling. Collecting as few as 10 oil samples per quarter is considered adequate in some plants. This is probably not enough, however. On the other hand, some intensive programs involve gathering more than 1000 samples per month. In general, collecting fewer than fifty samples per month is an indication of an incomplete oil analysis program in most plants.

Samples should be collected and tested often enough to detect contamination and chemistry problems and establish trends. If a seal failure could allow contamination leading to damage in three months, then monthly samples will be necessary to identify a problem early enough so that steps can be taken to repair the seal.

Because every plant is unique, there is no single answer to questions regarding the number of oil samples to be gathered or collection frequency. On average, most industrial mills or plants can expect excellent cost savings based on information gained by collecting and analyzing between 50 and 200 samples each month. Here is one rule of thumb: if there are 3000 vibration points in the oil-lubricated pumps, motors, compressors, turbines, gearboxes, air handlers, and hydraulic systems in a plant, at least 100 oil points should be sampled monthly.
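
A quick translation of that rule of thumb into arithmetic (the 30-to-1 ratio is simply the 3000-to-100 example above restated, and the 50-sample floor follows the earlier guidance):

```python
def monthly_oil_sample_points(vibration_points: int) -> int:
    """Rough minimum number of oil sample points per month for a plant."""
    return max(50, vibration_points // 30)

print(monthly_oil_sample_points(3000))   # 100, matching the 3000-vibration-point example
```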

Testing. The going price for industrial oil analysis is about $32 per sample, but may be as low as $8 each. Is a single sample worth $32? What will it cost to repair a damaged machine? Even the greater investment is economical protection for millions of dollars’ worth of production equipment.

Industrial machinery is subject to contamination and chemistry-related faults leading to abnormal wear mechanisms typically involving abrasion, fatigue, adhesion, and corrosion. A thorough oil analysis for industrial applications seeks to identify lubricant components that support wear due to abrasion, adhesion, and corrosion. Typical tests include spectrometric oil analysis, total acid number, water by Karl Fischer, particle count with size distribution, and automatic and analytical wear debris analysis (WDA).

Too many industrial oil samples are subjected to low-cost analysis when they really need the particle counting, particle size distribution, and wear debris analysis that come only with a more expensive program. Some maintenance departments choose low-cost tests because they do not understand the value of looking for particles larger than 10 microns. Purchasing agents may insist on lower cost analysis. Or, the oil supplier may give away oil analysis as a value-added service.

Savings. If substantial cost savings cannot be attributed to oil analysis, serious changes to the current program should be considered because it is not producing results that are attainable. Successful oil analysis programs do pay off. A saving of $250,000 in the first year is not unusual, as an effective oil analysis program will identify potential problems that can be corrected by appropriate maintenance actions.

If no follow-through is called for, the program is a waste of time and money. It would be better to do no oil analysis than to have a program with no maintenance follow-up. At least management will not be lulled into a false sense of security by thinking that plant assets are being fully protected.

Establish a strong program
The operational life of most industrial equipment is directly related to the contamination and chemistry of the lubricants, which are root causes of abrasion, adhesion, and corrosion. When a program is established to recognize the presence of contaminants and to identify the types and sizes of particles present in a lubrication system, a giant step has been taken toward predicting if and when a machine will fail in order to initiate corrective measures.

Is it better to do testing and analysis on site or to rely on a well-equipped off-site laboratory to test samples collected in the plant? There are a number of good arguments for doing the work on site, including better control, immediate results and immediate retest if needed, analysis by technicians who are familiar with the equipment, and the ability to test more lubricants more often.

In general, on-site oil analysis makes sense for large industrial plants with more than 100 oil systems. An effective on-site program monitors machine wear, system contamination, and oil chemistry. Emphasis must be placed on the identification of the primary root causes of abrasive wear, fatigue wear, adhesive wear, and corrosive wear.

Considering the wide range of equipment in an industrial plant and the number of faults to be monitored, an on-site program must have a range of capabilities, including both quantitative and qualitative wear debris analysis, particle counting, water contamination monitoring and oil chemistry testing.

The key to the success of an on-site program is a well-trained, in-house champion with a vision for improvement.

No matter who does the testing and analysis, successful oil analysis programs generally encompass:

  • Automatic wear debris analysis providing a quantitative measure of ferrous and nonferrous metal particles in an oil sample
  • Analytical wear debris analysis (e.g., the viewing and classifying of wear debris under a microscope)
  • Particle counting with size distribution
  • Water contamination
  • Oil chemistry and viscosity
  • Expert interpretation
  • Electronic reporting

Wear debris analysis (WDA). WDA measures the nature and severity of wear mechanisms quantitatively and qualitatively. An automatic wear debris analyzer or ferrous density monitor not only measures particle size, it screens out the relatively few samples requiring in-depth visual analysis. Qualitative analysis is performed by a trained technician who uses a microscope to view both ferrous and nonferrous wear debris on a glass slide or filter patch.

In many cases, this step produces the most useful information of all, including the concentration, shape, size, texture, color, and optical properties of the particles. A trained technician can determine types and causes of wear and contamination (abrasion, adhesion, fatigue or corrosion) quite accurately using this technique.

Abrasive wear particles normally are an indication of excessive dirt or other hard particles that are cutting away at load-bearing surfaces. Adhesive wear particles will reveal problems with lubricant starvation that results from either low or high load, high temperature, slow speed, or inadequate lubricant delivery. Fatigue wear particles may be associated with mechanical problems, such as improper fit, misalignment, imbalance or some other condition. Corrosive wear particles indicate the presence of corrosive fluids, such as water or process materials contacting metal surfaces.

This knowledge, which reveals the condition of a piece of equipment when the sample was taken, is useful in predicting when corrective action will be needed and what must be done.

Particle counting with size distribution. Water and dust, the most common contaminants in oil, are primary causes of abrasion, corrosion and fatigue wear. Effective oil analysis programs quantify both water and dust.

Particle counting is the accepted method for measuring total concentration of particulate debris, as well as size distribution. Both are important for monitoring the condition of the lubricant and effectiveness of the filtration system. Particle counters for on-site oil analysis should actually measure multiple size ranges leading to a determination of size distribution. Both bench-top and portable units are available, but bench-top use of a portable particle counter can be cumbersome.

A new ppm distribution method combines particle counting and WDA for maximum impact. Fig. 2 shows parts per million (ppm v/v) of solid particles vs size distribution for those particles. Each peak in the ppm distribution plot represents a different source of contamination or wear debris in the oil. If there are multiple peaks in the distribution, there should be a separate group of debris on a filter patch or glass slide corresponding to each peak in the plot. Each particle group can be attributed to a root cause event associated with contamination or wear events.

Water contamination. There are many ways to measure water in oil. Visual appearance, crackle test, and time-resolved dielectric are three common methods of identifying water contamination problems. The exact measure of water concentration is best left to a laboratory using Karl Fischer titration. Corrective actions depend on whether the water is in solution, emulsion, or free state. In general, emulsified and free water are most damaging.

Oil chemistry. Chemical instability in lubricants is often caused either by ingress of process materials into the fluid or by breakdown of the fluid. Breakdown occurs due to high temperature exposure and/or aeration, possibly due to foaming.

Another serious form of chemical instability is the result of water or coolant contamination. These corrosive fluids not only attack metal surfaces, they also consume vital additives that are needed for anti-oxidation, anti-wear and other functions in the fluid.

Chemistry monitoring normally involves comparison of a used oil sample with new oil. Visual examination can reveal color changes from amber to reddish-brown, indicating chemical deterioration. Quantitative on-site methods for measuring oil chemistry include dielectric, voltammetric, and TAN test kit. Dielectric increases of 0.01 to 0.02 and TAN increases of 1.0 to 2.0 each represent significant chemical deterioration of lubricating oil.
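
A minimal sketch of how those thresholds might be applied when screening a used-oil sample against its new-oil baseline; the cutoffs are the ones cited above, not values from a published test method.

```python
def chemistry_alert(dielectric_increase: float, tan_increase: float) -> bool:
    """Flag a sample whose dielectric or TAN rise versus new oil indicates deterioration."""
    return dielectric_increase >= 0.01 or tan_increase >= 1.0

print(chemistry_alert(dielectric_increase=0.015, tan_increase=0.3))   # True
print(chemistry_alert(dielectric_increase=0.002, tan_increase=0.2))   # False
```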

Staffing and training
A basic understanding of the importance of oil analysis is imperative for all maintenance personnel. The single most important ingredient in a successful oil analysis program, however, is the champion behind it. One individual must be assigned to take the lead, and that person must be passionate about the opportunity he or she has been given to save the company money.

Skill training and certification are essential. Many equipment vendors offer training and certification for their instruments and methods. In addition, general tribology training is available from various sources. The Society of Tribologists and Lubrication Engineers (STLE) provides certifications such as Certified Lubrication Specialist and Oil Monitoring Analyst. The standards are high, and the exams are not easy. Formal training is crucial.

Which department should perform on-site oil analysis? The reliability team in the maintenance department is the first choice. A good alternative is the technical services department, where laboratory analysis for environmental and process monitoring already takes place. A third possibility is an outside contractor to collect the samples and perform oil analysis on site.

In any of these scenarios, the findings must impact equipment maintenance. Oil analysis without corresponding corrective actions will not be effective.

Measuring results
Periodic auditing is suggested for best practice. Each audit should compare the plant with plants identified as industry benchmarks (e.g., those setting the highest standards for oil analysis practices). The audit report should include an assessment of performance and cite ways to improve.

After the initial audit, a continuous improvement plan should be drafted. The plan should set objectives for the next 24 months. Then, quarterly reviews should measure progress. Each quarterly review should be summarized in a status report to the maintenance and plant managers. All reports must include available financial evidence of savings.

Benefits outweigh costs
Many industrial plants are faced with downsizing and out-sourcing for maintenance activities. Some predictive maintenance teams that formerly comprised four to six people have been cut in half. How can these plants possibly increase the number of oil samples collected from 10 per month to more than 100 per month? Moreover, how can they begin doing the oil analysis?

The plant collecting 10 or 20 samples per month is missing problems costing far more in labor and other expenses than the cost of collecting more samples. It takes only about one week per month to collect and test 100 samples. The payoff in both labor and cost savings is far greater than the time spent doing this work.

Obviously, new programs must be justified, but there are dozens of documented case histories with anecdotal evidence backed by the knowledge of tribology experts.

Example Oil Monitoring Program

Parameters | What is Measured? | Significance

Wear Parameters
Ferrous index | Iron particles >5 microns | Recent abnormal wear
Large ferrous indication | Iron particles >60 microns | Abrasive wear indication
Large nonferrous indication | Other metals >60 microns | Abrasive wear indication
Analytical wear debris analysis | Microscopic particle examination | Wear severity and root cause

Contamination Parameters
Particle count | ISO counts (8 sizes), ppm distribution (3 ranges), system debris (ml) | Dust, wear, and process contamination
Contaminant index | Nonferrous contaminants | Corrosive fluid contamination
Water contamination | Water or other corrosive fluid | Corrosive fluid contamination
Free water droplet indication | Immiscible fluid droplets in oil | Corrosion and poor lubrication

Chemistry Parameters
Chemical index | Deteriorated lubricant | Lubricant no longer fit for use
Dielectric permittivity | Physical property of lubricant | Wrong oil or degraded oil
Viscosity | ISO viscosity grade | Wrong oil or dilution with fuel

Ray Garvey is tribology solutions manager for Emerson Process Management CSI. Telephone (865) 675-2110;e-mail ray.garvey@emersonprocess.com; Internet www.mhm.assetweb.com. His certifications include PE, CLS and OMA1.


September 1, 2005

Somewhere In Time

“The past is the future, the future is the past: it all gives me a headache.”—At least that’s the word from one popular television show. Consider how this applies to maintenance and reliability.

In reality, how many “new” processes and procedures actually have been created in the area of maintenance and reliability in the last few years? Are the latest “buzzwords” simply new names for processes and procedures that have existed for decades? Preventive Maintenance, Reliability Centered Maintenance, Life Cycle Costing, etc.—many of these types of processes can be traced to the 1960s—maybe earlier. Consequently, the practices and processes that many companies are now planning to implement in the future actually existed in the past.

Even Computerized Maintenance Management Systems (CMMS), which provided the foundation for many current Enterprise Asset Management (EAM) systems, were being implemented in the mid-1970s. While those early systems may not have utilized all of the technologies to enhance the “user-friendliness” inherent in today’s systems, they still supported maintenance business processes.

If we focus just on the CMMS/EAM aspect of the maintenance and reliability market, what do surveys tell us about the implementation and utilization of our current systems?

Most statistics show that today’s CMMS/EAM systems are not being implemented properly; that they are not being utilized once they are implemented; and, that they are not delivering the returns they were projected to achieve. There are many common reasons why, including:

  • Lack of management support for and understanding of the CMMS/EAM project
  • Lack of organizational business processes to properly utilize the CMMS/EAM system
  • Insufficient implementation resources
  • Insufficient personnel to utilize the system

We’ve been watching CMMS/EAM systems fail for these and related reasons for years. Why, then, would we let another implementation fail? Can’t we learn from the past?

Do companies believe their implementations are so unique that they can’t learn from the successes and failures of others? Would this type of shared information not help a company optimize its investment in a CMMS/EAM system?

In Maintenance Technology and at our MARTS conferences, CMMS/EAM system implementation is an especially hot topic, and it has been covered from almost every possible angle. Yet, despite overwhelming interest, why has the percentage of perceived successful implementations still hovered at 50% or less?

How much money are we wasting by making the same mistakes time after time—year after year? Since expense dollars not spent become profit dollars, what we really should ask ourselves is: “How much of our company’s profit are we wasting by repeating historical mistakes?”

With so many educational resources available today, there’s no excuse for repeating historical mistakes in the selection, implementation and utilization of CMMS/EAM systems. If nothing else, we could find ways to make new mistakes. (This holds true for other types of maintenance and reliability processes, too.)

So, where does your company stand in its current CMMS/EAM efforts? In the past or in the future? To paraphrase that television show, just thinking about the question could be enough to bring on a headache! As for me, I’m off to find some Excedrin—but, I’ll be back next month.


September 1, 2005

Cost Budgeting and Control for Maintenance

Proposals that significantly reduce the largest sectors of a plantÕs total cost structure usually are the most appealing to management. This article introduces an opportunity that passes the test.

One of the most fundamental requirements of business operations is the ability to budget and control cost. This is especially so for the big-ticket functions in a plant’s total cost.

Maintenance, in many operations, is a cost well in excess of a million dollars every month. The complex nature of this function, however, has typically prevented practices from taking shape that meet basic standards for cost budgeting and control. By bringing accounting, internal audit, and database mining skills to the problem, the ability to budget and control maintenance cost can finally match the complexity of the maintenance function.

Building the system to do so is one of a plant’s most attractive alternatives for increasing its total profitability. In the process, a plant quantifies the gap in cost defined as “what is the total cost and why, and what should it be?”

The process then refines and expands the plant’s previously installed best practices as is necessary to close the cost gap. Closing this gap is accomplished through a method known in accounting as “activity-based costing.”

Profit potential
Based on our experience, the ability to budget and control maintenance cost can increase income by 10 to 30 percent during strong business cycles, and over 100 percent during weak cycles. It may be the difference between profit and loss in the worst business cycles.

A plant is not required to take a leap of faith, however. The value is naturally calculated early on, as an operation’s cost budget and control system is being built.

It is easy to estimate the profit possibility for any plant. Industry benchmarking has found that total maintenance cost can be reduced by 10 to 35 percent.

The best acid test of potential, though, would be to assume a low range to see if the profit increase would still be significant.

First, multiply the plant’s total maintenance cost by the selected percentage by which maintenance cost may be reduced. Since a dollar of reduced cost goes to the bottom line, the second step is to add the result of the first step to plant profit associated with various business cycles and convert it to a percent increase. The increase for plant return on capital employed will be approximately the same.
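
A worked version of those two steps, using assumed figures rather than data from any plant in the article:

```python
annual_maintenance_cost = 12_000_000   # assume roughly $1 million per month
reduction_fraction = 0.10              # low end of the 10-35 percent benchmark range
annual_profit = 8_000_000              # assumed profit for the business cycle being tested

savings = annual_maintenance_cost * reduction_fraction      # step 1: estimated savings
profit_increase_pct = 100 * savings / annual_profit         # step 2: percent profit increase

print(f"Savings: ${savings:,.0f}")                     # Savings: $1,200,000
print(f"Profit increase: {profit_increase_pct:.0f}%")  # Profit increase: 15%
```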

Methodology explained
Figure 1 shows that a cost is the combination of an activity performed for a business object and the resources consumed by the activity. Resources are everything for which a dollar is spent, as captured in the plant general ledger.

Cost budgeting determines what the activities should be for a business object, such as a plant area or product and the resources to execute them. Through each business object, the activities are tied to plant performance. Cost control assures that the activities were performed as planned and with respective resources.

Budgeting is preceded by three things.

  • First is understanding the what, why and how for competing and profiting.
  • Second is establishing the business objects for which activity-based cost must be known, budgeted, and controlled, because they are the focus of plant performance and decision-making. Examples include product, market, customers, department, asset, and crew.
  • Third is identifying the plant’s unique core performance and cost problems with respect to reliability and maintenance.

Building the budget allows management to understand why the cost to compete and profit is what it is with respect to business objects—and what they should be, instead. It also allows the plant to distinguish between the resources needed for the activities and the capacity and cost of the plant’s existing resource base. It follows that such a business-object-focused understanding allows management to ask and answer the questions it needs for all types of decision-making.

Control begins by routinely measuring, investigating, and acting on variances to what the costs should be, thus allowing a plant to graduate from external benchmarks to its own unique set. In the words of one plant manager, control can best be summed up as: “I don’t want to just see the numbers, I want to know what you are doing about them.”

Acting on the answer to this type of request causes a plant’s best practices to deliver competitiveness and profitability.

Assurance procedures
Building an activity-based budgeting and control system is a wasted investment if a plant cannot act effectively and efficiently to reduce and hold the line on targeted costs. The key is the connection between the system and the operation’s existing best practices. The connection is assurance procedures.

Assurance procedures underlie the budget, variance and control actions. They are refinements and expansions to maintenance best practices (and others) that are already in place. These procedures are revealed and implemented as the system is designed, built and operated. Their purpose is to confirm performance, control cost and generate control-quality data.

Assurance procedures accept the plant’s best practices as they are now. They make only the improvements needed to tap into their implications for reducing cost and assuring workload execution. Thus, the list of improvements will be short—and most will be a small matter to implement.

Building it
Building an activity-based cost budgeting and control system is a three-step dance.

  • The first step is to understand the plant in depth as a business. This reveals how the plant works, competes and profits, as well as what plant-wide practices it has built over time to do so. In turn, this will reveal the core types of decision-making and cost and performance problems that must be dealt with through the system. In such a context, the plant’s total costs will be identified through the general ledger of accounts, and its software systems will be mapped to size up the availability and quality of data in the plant.
  • The second step is to set the structure for the cost system, based on the facts gathered by the first step. This step defines the structure of business objects, activities, and resources on which the budget and variance documents will be designed. Hand in hand, the necessary assurance procedures are defined.
  • The third step is to build the system. This is done in a manner akin to building the next several rungs on a ladder to be climbed. For example, the most common initial obstacle to activity-based cost management is the control-quality of data. Of course, much of the important data resides in the CMMS database. It is now unusual to find plants without a CMMS. Initially, however, the data often is not of sufficient quality for cost management.

If the data is weak, the first action is to immediately determine and implement assurance procedures for quality controls at its source. This quickly accumulates data with which to build the plant’s ability to form a detailed explanation of what is happening as the present month is unfolding and to present a full picture of the past months. After several months, the data will have reached a point of statistical mass on which the plant can begin to build its ability to budget cost and subsequently conduct monthly variance analysis.

As the budget and control system is built, the plant will concurrently act to reduce total cost, while assuring workload performance. Meanwhile, management’s discussions and decisions around maintenance change radically, because they can.

A new best practice
Cost budgeting and control for maintenance is a new “best practice” for plants. Think of this methodology as a sturdy platform from which other long-established maintenance best practices are molded. It is an effective way to help them deliver their potential for your plant’s competitiveness and profitability.

Richard Lamb, P.E., CPA, is president of Cost Control Systems, LLC, of Houston, TX, a firm dedicated to improving business annual reports by knowing, budgeting, reducing, and controlling maintenance cost. He is author of the book, Availability Engineering and Management for Manufacturing Plant Performance. Telephone (713) 777-9492; e-mail rlamb@cost-controls.com; Internet www.cost-controls.com


September 1, 2005

Guidelines to Address Arc-Flash

You have decided to conform to the requirements of NFPA 70E, the standard for electrical safety in the workplace. You already have an electrical program for preventing shock; here is an explanation of how to address the 70E requirements for arc-flash.

Flash hazard analysis
On the subject of arc-flash, 70E requires a flash hazard analysis. Although it does not explain how to conduct an analysis, it does say the analysis shall determine a “flash protection boundary” and the personal protective equipment (PPE) requirements when working within that boundary.

NFPA 70E formula to calculate arc-flash boundaries

Dc = (2.65 x MVAbf x t)^(1/2)
where
Dc = distance in feet from an arc source for a second-degree burn
MVAbf = bolted fault capacity in megavolt-amperes available at the point involved—a function of available short circuit current
t = time in seconds of arc exposure
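
A direct transcription of the formula into Python for a quick check; the 20 MVA bolted fault capacity and 0.1-second clearing time below are illustrative assumptions, and any study used to set real boundaries belongs under engineering supervision.

```python
import math

def flash_protection_boundary_ft(mva_bolted_fault: float, arc_time_s: float) -> float:
    """Distance in feet at which incident energy falls to the second-degree-burn threshold."""
    return math.sqrt(2.65 * mva_bolted_fault * arc_time_s)

print(round(flash_protection_boundary_ft(20.0, 0.1), 2))   # ~2.3 ft
```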

Flash protection boundary
Arc-flash boundaries are required around electrical equipment such as switchboards, panelboards, industrial control panels, motor control centers, and similar equipment, when an individual works on or in the proximity of “exposed energized” (energized and not enclosed, shielded, covered, or otherwise protected from contact) components. This includes conducting activities such as examination, adjustment, servicing, maintenance, or troubleshooting. Equipment energized below 240 V need not be considered, unless it is fed by a 112.5 kVA transformer or larger.

The arc-flash boundary is a distance at which a person working any closer at the time of an arc-flash may receive permanent injury (the onset of a second-degree burn or worse) if not properly protected by flame-resistant (FR) clothing. Research has shown that permanent injury results from an arc-flash that causes an incident energy of 1.2 calories/cm2 (cal/cm2) or greater at the skin’s surface.

This distance (boundary) cannot be determined by a casual survey of electrical equipment. The only practical way of determining this boundary is to calculate the magnitude of the arc (a function of the available short circuit current), estimate how long the arc will last (a function of the interrupting time of the fuse or circuit breaker), and then calculate how far away an individual must be to avoid receiving an incident energy of 1.2 cal/cm2.

Small facilities
In small facilities, such as small businesses and offices that use only 240 V and less and have minor power requirements (primarily lighting and receptacle loads), it may not be practical or economical to calculate arc-flash boundaries. It appears the authors of 70E realized this, as they established a default flash boundary that can be used without calculations. The default boundary extends four feet from the energized, exposed components. Any time individuals are inside this boundary, they must wear proper PPE to avoid a permanent injury in the event of an arc-flash.

In most small facilities, the four-foot boundary is likely overly restrictive, making it probable individuals will attempt to avoid use of the PPE, potentially resulting in enforcement issues. In a few cases the opposite may be true; the four-foot boundary may be inadequate to avoid injury due to high incident energy.

70E addresses this limitation in a footnote, qualifying that the four-foot boundary is applicable only where the available short circuit current does not exceed 50,000 A and the clearing time of the fuse or circuit breaker does not exceed 0.1 sec, or any combination not exceeding 5000 A sec. This footnote seems to place small facilities back into the position of collecting data and calculating short circuit current and clearing times to justify using the four-foot boundary. However, in the vast majority of small facilities, if the electrical system were properly designed and if it has been properly maintained by competent electricians (always installing properly sized fuses and circuit breakers), the four-foot boundary should be more than adequate to avoid any permanent injury from an arc-flash.
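
As a rough sketch of that footnote check (the current and clearing-time values below are assumed examples; the real figures must come from the utility and the protective-device data):

  # Sketch of the footnote limit for using the four-foot default boundary.
  def default_boundary_allowed(short_circuit_amps, clearing_time_s):
      # True when the combination stays within 5000 ampere-seconds
      # (50,000 A cleared in 0.1 sec is one such combination).
      return short_circuit_amps * clearing_time_s <= 5000.0

  print(default_boundary_allowed(22_000, 0.05))  # True  (1,100 A-sec)
  print(default_boundary_allowed(65_000, 0.2))   # False (13,000 A-sec)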

Selecting PPE when using the 4-ft default boundary

Example of NFPA 70E hazard/risk category classifications by task

600 V equipment other than MCCs, panelboards, and switchgear
Task: Work on energized parts, including voltage testing
Hazard/risk category: 2*

(2* means a double-layer switching hood, hearing protection, hard hat, leather gloves, and leather work shoes are required in addition to the Category 2 clothing requirements.)
(If the available short circuit current is less than 10,000 amperes, the hazard/risk category can be reduced by one category level.)

Fig. 1. The first table matches a “hazard/risk category” to a specific task by voltage level and type of equipment.

Other facilities
For other facilities, especially those having employees, contractors, or service personnel who perform functions exposing them to energized components, the four-foot default boundary is probably not practical or appropriate. The experience of this author indicates that a substantial percentage of the equipment operating at 480 V and less in most facilities will have an arc-flash boundary of less than 12 in., which means FR clothing for the face/chest area is not required when working on or near that equipment.

However, experience also has shown that practically every large facility has some equipment where even the four-foot default boundary is not adequate to avoid permanent injury in the event of an arc-flash. Consequently, 70E provides an alternative: a formula (based on IEEE Standard 1584) to be used under engineering supervision, when the limitation of 5000 A sec is exceeded or when realistic flash boundaries are desired. (See accompanying text “NFPA 70E Formula to Calculate Arc-flash Boundaries.”) To use the formula requires knowledge of available short circuit current and corresponding clearing time.

Available short circuit current
Determination of short circuit current starts with the electric utility providing information about its delivery capability at the service entrance/meter point. Receiving this data from the utility can be as easy as a phone call or as difficult as pulling teeth.

Proceeding from the service entrance to the equipment to be worked on, the length, size, and type of every conductor and the nameplate information of every transformer in that path must be recorded. With this recorded data and the right software, a reasonable estimate of the available short circuit current can be calculated for use in the flash boundary formula.
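
The article leaves this calculation to software, but as a hedged sketch, one common hand approximation (the “infinite bus” estimate from a transformer nameplate, which ignores utility source and conductor impedance and therefore overstates the current) looks like this; the nameplate values are assumed examples:

  import math

  # Rough "infinite bus" estimate of secondary fault current from a
  # three-phase transformer nameplate. Ignores source and conductor
  # impedance, so it overstates the available current; shown only as
  # a simplified illustration of the inputs involved.
  def transformer_secondary_fault_amps(kva, secondary_volts, impedance_pct):
      full_load_amps = kva * 1000.0 / (math.sqrt(3) * secondary_volts)
      return full_load_amps / (impedance_pct / 100.0)

  # Assumed example: 1500 kVA, 480 V transformer with 5.75% impedance
  print(round(transformer_secondary_fault_amps(1500, 480, 5.75)))  # about 31,400 A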

Clearing time
Determination of the arc-flash clearing time at equipment requires collection of data on every fuse and circuit breaker in the circuit between the utility service and the equipment where the flash boundary is to be determined. Time vs. current interrupting information is then acquired from the protective device manufacturer, based on the data collected.

Using the short circuit current previously determined and time-current data from the protective device manufacturer, a reasonable estimate of the time required to interrupt the arc-flash can be determined for use in calculating arc-flash boundary.
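
As an illustration only, a minimal sketch of estimating clearing time by log-log interpolation between a few points read from a time-current curve; the curve points are made-up example values, not data for any real fuse or breaker:

  import bisect, math

  # Made-up example points read from a time-current curve: (amperes, seconds)
  curve = [(1_000, 10.0), (5_000, 0.5), (20_000, 0.05), (50_000, 0.01)]

  def clearing_time_s(fault_amps):
      # Log-log interpolation between the two nearest curve points
      currents = [c for c, _ in curve]
      if fault_amps <= currents[0]:
          return curve[0][1]
      if fault_amps >= currents[-1]:
          return curve[-1][1]
      i = bisect.bisect_left(currents, fault_amps)
      (c0, t0), (c1, t1) = curve[i - 1], curve[i]
      frac = (math.log(fault_amps) - math.log(c0)) / (math.log(c1) - math.log(c0))
      return math.exp(math.log(t0) + frac * (math.log(t1) - math.log(t0)))

  print(round(clearing_time_s(31_400), 3))  # about 0.023 s on this made-up curve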

Formula, software, consultants
For facilities having only a few circuits to be evaluated, using the 70E formula to determine the arc-flash boundary may be feasible. If many circuits are involved, however, commercially available software or a consultant should be considered.

Some commercially-available software performs all the calculations required, including determination of available short circuit current, fault clearing time, and arc-flash boundary. The cost of this software can exceed $10,000, and it should be used under engineering supervision. Keep in mind that data collection is still required for input into the software program.

NFPA 70E establishes five “hazard/risk categories” of FR clothing based on ability to limit the injury to a curable burn at different levels of incident energy:

Category 0: acceptable for incident energy exposure of 0–2 cal/cm2
Category 1: 2–4 cal/cm2
Category 2: 4–8 cal/cm2
Category 3: 8–25 cal/cm2
Category 4: 25–40 cal/cm2

Fig. 2. The second table describes the FR clothing and corresponding incident energy for each of five hazard/risk categories.
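
A minimal sketch of mapping a calculated incident energy to these category ranges; how a value falling exactly on a break point is classified is an assumption of the sketch:

  # Sketch: map a calculated incident energy (cal/cm2) to the hazard/risk
  # category ranges listed above. Values landing exactly on a break point
  # are assigned to the lower category, an assumption of this sketch.
  def hazard_risk_category(incident_energy_cal_cm2):
      breaks = [(2, 0), (4, 1), (8, 2), (25, 3), (40, 4)]
      for upper, category in breaks:
          if incident_energy_cal_cm2 <= upper:
              return category
      return None  # beyond 40 cal/cm2, the highest level 70E addresses

  print(hazard_risk_category(1.0))   # 0
  print(hazard_risk_category(6.3))   # 2
  print(hazard_risk_category(55.0))  # None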

PPE selection
NFPA 70E requires that the employer provide and the employee wear appropriate FR clothing and other PPE when within the arc-flash boundary. Selection of FR clothing is based on the level of incident energy the individual will be exposed to in the event of an arc-flash. The level of incident energy is a function of the distance the individual is from the arc-flash (incident energy increases rapidly as the individual moves closer to the arc-flash). Generally, 18 inches is assumed to be the distance between a worker’s face/chest and the arc-flash.

Using the same information as was used to determine the arc-flash boundary, the engineer can calculate the incident energy in cal/cm2 at 18 inches. Since FR clothing is rated in cal/cm2, this allows selection of appropriate clothing to protect against the incident energy of exposure.
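
As a rough illustration only (not the IEEE 1584 calculation): because incident energy at the flash boundary is 1.2 cal/cm2 by definition, an open-air, inverse-square scaling assumption gives a crude estimate of the energy at 18 inches; that simplification is an assumption of this sketch, not a method prescribed by 70E:

  # Crude sketch: scale incident energy from the flash boundary (1.2 cal/cm2
  # by definition) back to 18 in., assuming open-air inverse-square falloff.
  # This is not the IEEE 1584 incident-energy calculation.
  def incident_energy_at_18in(boundary_ft):
      boundary_in = boundary_ft * 12.0
      return 1.2 * (boundary_in / 18.0) ** 2

  # Reusing the earlier assumed 2.3-ft boundary
  print(round(incident_energy_at_18in(2.3), 1))  # about 2.8 cal/cm2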

It is not uncommon for calculated results at 18 inches to show an arc-flash incident energy of less than 1.2 cal/cm2, resulting in no FR clothing requirement for the face/chest area, only clothing that will not melt, such as cotton. However, additional PPE may be required for parts of the body that are closer than the 18-inch basis. It is also not uncommon to find at least one location in a facility where the calculated incident energy at 18 inches exceeds 40 cal/cm2, the highest level at which 70E considers protection practical (some clothing manufacturers offer clothing with higher ratings).

Small facilities that choose to use the four-foot default boundary in lieu of using the formula will not have the incident energy results necessary to select the proper level of PPE for the arc-flash hazard. For these facilities, 70E provides two tables to use in selecting PPE. The first table matches a “hazard/risk category” to a specific task by voltage level and type of equipment (Fig. 1). The second table describes the FR clothing and corresponding incident energy for each of five hazard/risk categories (Fig. 2).

Limitations of the NFPA 70E tables
Use of the 70E tables to select PPE has limitations. The first table matching the category to the task is limited to electrical systems that do not exceed specified levels of available short circuit current and fault clearing times as described in the table footnotes. Additionally, 70E states that for tasks not included in the table, and for electrical systems that exceed the footnote limitations, the tables cannot be used, and the incident energy must be calculated for PPE selection.

Using the tables when the electrical system exceeds the levels described may expose individuals to hazardous energies beyond the protection of their FR clothing, potentially resulting in serious injury or death. On the other hand, when the footnotes are met, the level of protection can be overly conservative, which may increase hazards to the individual by limiting vision, mobility, and dexterity. In other words, it is always better to select the proper PPE based on the calculated incident energy of exposure. Selecting PPE based on incident energy also may result in substantial savings over the cost of selecting PPE based on the tables.

Labeling
Although not required by 70E, labeling of equipment is an essential part of the flash hazard analysis. Establishing an arc-flash boundary and determining the appropriate PPE is useless if the information is not communicated to the individuals working on or near the equipment with the hazard. The label should be placed in a conspicuous location that will be seen by individuals before they open equipment.

Since 2002, the National Electrical Code (NEC) has required labeling of equipment to warn of potential flash hazards. Although the current NEC requirement does not specify the information to be provided on the warning label, it is likely that future editions will. This author recommends that, at a minimum, the following information be included on the label (a minimal formatting sketch follows the list):

  • Maximum voltage in the equipment
  • Arc-flash boundary
  • Required PPE (hazard/risk category or cal/cm2)
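
As a minimal sketch of pulling those three items into label text (the field values shown are assumed examples, not results of any analysis):

  # Sketch: assemble the recommended minimum label content into label text.
  def arc_flash_label(max_voltage, boundary, required_ppe):
      return ("WARNING - ARC FLASH HAZARD\n"
              f"Maximum voltage: {max_voltage}\n"
              f"Arc-flash boundary: {boundary}\n"
              f"Required PPE: {required_ppe}")

  # Assumed example values
  print(arc_flash_label("480 V", "36 in.", "Hazard/risk category 2 (8 cal/cm2 clothing)"))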

Advantages of qualified consultants
An arc-flash analysis by a qualified consultant should provide more than just the results of the analysis. The consultant should review each location with an arc-flash hazard requiring Category 1 FR clothing or greater to determine whether any changes can be made to reduce the hazard. He or she should evaluate changing fuse types or breaker settings, along with other opportunities to reduce or eliminate the need for FR clothing. These recommendations can result in substantial economic savings on FR clothing and in reduction or elimination of arc-flash hazards.

The consultant should provide one-line drawings of the electrical system that has been evaluated, as well as labels for all equipment having the potential of a hazardous arc-flash.

As part of the short-circuit analysis, the consultant should identify any problems in the interrupting capacity of protective devices. Inadequate interrupting capacity can result in the protective device exploding during a major fault, potentially causing injury to personnel and/or costly downtime.

Consultants should make recommendations to improve any overcurrent coordination problems. The objective is for the interrupting device closest to the fault to open first. This minimizes the equipment affected in the event of a fault, improving operations and safety by limiting exposure to electrical hazards when troubleshooting.

Summary
Before purchasing FR clothing and requiring individuals to wear clothing that they may or may not need, complete an arc-flash hazard analysis. Identify the equipment that has the potential to cause permanent injury or death from arc-flash and then evaluate opportunities to eliminate or reduce the hazard, in lieu of using PPE.

After taking advantage of every feasible, realistic opportunity to reduce or eliminate arc-flash hazards, purchase the appropriate PPE or arrange for it through a uniform service. Label equipment with the information necessary for individuals to know the hazard and the required PPE. (This information is also essential for contractors and service personnel who work on or near exposed energized components.)

Train qualified and affected personnel on how to recognize and avoid electrical hazards (shock and arc-flash), and train them on the results of the arc-flash hazard analysis.

John C. Klingler, P.E., is vice president and an instructor for Lewellyn Technology, Inc., Linton, IN. Telephone: (800) 242-6673; e-mail: jklingler@lewellyn.com; Internet: www.lewellyn.com

