Archive | October, 1998

October 2, 1998

An Outstanding Opportunity

Circle December 1–3 on your calendar. Those are the dates for MAINTECH, a new conference and trade show for the maintenance and reliability community sponsored by Maintenance Technology Magazine. We plan to hold it twice a year, the first one this December in Houston.

Why another conference and trade show? To provide an additional opportunity for you to get the information you need ... network with the people you need to know ... and check out current technologies and services ... to help you be more effective and your company more profitable.

I’m very excited because conferences and trade shows are among my favorite activities. I’ve been to all kinds of events. I’ve been a committeeman, speaker, exhibitor, and attendee. I have been crushed by the crowd, cooked by body heat in small meeting rooms, frozen by air conditioning in auditoriums, and awed and embarrassed by speakers. I have had an opportunity to question the famous and not so famous in press conferences and question and answer sessions. And I’ve enjoyed every minute of it. Perhaps that is why I became a reporter and editor—to have an opportunity to participate in technical conferences and trade shows every month of the year.

Not everyone is as fortunate as I am when it comes to attending conferences. You probably have tighter time constraints and a more restricted travel budget than I do. So you have to choose. mainTech South ’98 will provide another choice (we think the best choice), perhaps with a better mix of topics and exhibits for your needs, and possibly a location closer to home.

The program is designed to cover the business and technology of maintenance management. You will be able to choose from 30 sessions presented in five simultaneous tracks: corporate strategy, maintenance and reliability operations, condition assessment technologies, information management technologies, and the human side of managing change. Check out the article on page 32 for more information.

This year’s event is set up to provide a conference with enabling content delivered in seminar, panel, and case study formats by over 100 practitioners and experts. A series of educational workshops has been planned for the day preceding the conference. It has all been designed to support the action-oriented manager.

If you need to work more effectively with financials, technology, information, or people, there is a seat for you in Houston on December 1–3. I hope to see you there. MT

rcb


October 2, 1998

E-Mail: The Most Used, Least Effective Communications Tool

Working with many different types of manufacturing facilities around North America that are seeking improvement programs proves to be most insightful. We have been working with several locations to improve communications about equipment and process reliability and have discovered why some preventive maintenance (PM) programs fail. The answer lies somewhere in the use of e-mail (electronic mail) systems as a communications tool. Here is the scenario.

For several years now e-mail has grown rapidly as a communications tool in XYZ Company. At one plant location they are well into e-mail and a two-year planned, preventive, and total productive maintenance implementation process. We were asked to look at the question of “how to improve communications that will result in improved plant reliability and performance.”

Communications methods at this plant typically included large and small meetings, one-on-one discussions, signs, posters, a plant newsletter, and e-mail. The most often cited and used communications method was e-mail, hands down. Everyone we met with spoke of the virtues and the effectiveness of e-mail. The advantages they cited included speed, mass distribution if needed, ease of getting a reply, and the ability to save time by not having to arrange meetings to communicate about specific topics.
Here is the downside of e-mail in this plant location. At first there was little awareness of any limitation. But the closer we got to the people on the plant floor—maintenance, operations, supervision—the more we saw a completely different side of the effectiveness of e-mail. What were the real world findings in this plant? Fully 70 percent of the employees did not have access to e-mail. This was the plant floor group. Next, even if they did have access to e-mail, approximately 30 percent of the workforce could not read or write above the seventh grade level (the level of basic adult literacy). And of the 70 percent, only a small number had computer skills (typically related to games on a home computer).

The answer to this e-mail communications gap? The first-line supervisors were made accountable for reading, and printing out, e-mails that were relevant to their work group and seeing that the messages were communicated to everyone who needed to know. Well, you can imagine how many e-mails were distributed daily at this plant. And you can imagine how little time the supervisors had to spend reading all those e-mails looking for items that should be communicated to their work groups. Supervisors told us “there has to be a better way!”

To address the supervisors’ concerns, we looked at a number of critical communications that went out via e-mail. We found a number of dysfunctional features. First, beyond the junk e-mail, we found that the “subject” line told little of the message’s importance. Second, the opening paragraph did not summarize the message but rather built the reader up to learn more as he or she read on. The text of the message was typically written at the twelfth grade level or higher, in very long lines of text and long paragraphs. And the very last line tended to be “Make sure this subject gets communicated to those employees in your area who do not have e-mail access.”

So, what is the bottom line for improving communications in ways that lead to improved plant reliability and performance? First, do not assume that just because you sent an e-mail you have communicated. The chances are you have not communicated at all to the very people who need to understand the message and take action. Then:

  • Make sure there is a formal communications structure in place to bridge the gap between those who have e-mail access and those who do not.
  • Write e-mails that speak to the readers’ reading and writing levels.
  • Make the subject line a specific action statement.
  • Specify in the opening paragraph who needs to hear the message; the lead paragraph should be a very brief summary of the entire e-mail.
  • Use short sentences, bullet lists, and specific action statements whenever possible. Do not ramble on.

Oddly enough, we have noticed some of the same barriers to effective PM programs as we noted for e-mail. Many PMs are not understood, and not used as intended, because they do not communicate to the end user as effectively as they should. Our suggestion: applying the same guidelines for e-mail effectiveness will likely result in more effective PMs in your plant. In the information age, communications will be a fundamental, underlying key to plant and equipment reliability. MT


October 2, 1998

Dollars: The Only Real Measure of Equipment Effectiveness

I’ve been talking so much about the necessity to build equipment effectiveness on a sturdy financial base that people are accusing me of being a “bean counter.”

I must admit it is a bit frustrating to believe very strongly in a concept that others may not consider especially important. After numerous discussions and presentations over the past couple of years, I’ve decided to try a sports metaphor to better explain the opportunity, challenge, and threat.

Ever consider how sports might change and how differently they would be played if no one cared about the score? Think about football. Fourth and goal with 40 seconds to play. There will be major differences in the play call if the team with the ball is behind by 2, behind by 3, behind by 4, or ahead by 20. Consider set and match points in tennis. The point is played very differently if the score is 40-love or love-40. Almost any sport you can think of is the same. Watch the last minutes of a close professional basketball game and try to convince anyone that score doesn’t matter.

You may be thinking all this is very interesting, but how does it apply to maintenance? I believe the answer is simple. Maintenance is scored in financial terms by those who count–the people who sign your checks. Maintenance costs as a percentage of replacement asset value is one widely used measure–there are others. Whether you agree or not, are comfortable or not, the fact is that we and our effectiveness are scored by money–how much was spent last year (call it A), how much will be spent this year (call it B), and if B is greater than A, your job may be in jeopardy.

Most equipment practitioners aren’t allowed to spend money without a guaranteed return of at least 30 to 40 percent. And this is why the so-called streamlined reliability centered maintenance (RCM) has become popular. RCM without initial prioritization can be hugely expensive. There are many stories of an expensive RCM that resulted in added maintenance to avoid unlikely failures on nonvital equipment with a long history of reliable operation and low cost of failure. Keith Mobley recently wrote of a survey in which nearly 51 percent of respondents reported their predictive maintenance programs did not return their costs.

What’s wrong in these pictures? The answer is that many people measure success in technical terms, such as preventive maintenance completed or flawed bearings identified and replaced, regardless of whether value is added. When dollars are the only score with any importance to executives, measures like these are like having the best passing statistics on a winning football team.

Most people are well aware of the characteristics and pitfalls of a cost center. In a cost center, everyone knows the reward for ending the year under budget. Foolish action taken late in the year to avoid finishing below budget was recently illustrated in the comic strip rendition of management incompetence–Dilbert. If our game is being scored in dollars, let’s figure out a way to use the scoring system to demonstrate conclusively that we can make more money for our companies by using better methods.

As mentioned, cost per replacement asset value is a frequently used scoring measure for maintenance effectiveness. This measure says we must reduce costs to some arbitrary value. What would be the reaction if we could begin from this measure and demonstrate conclusively that we could make more money for our company by playing the game from a profit perspective? The game would be scored on effectiveness measured in opportunities gained, increased output, quality, and profit rather than cost. Some are doing just that with spectacular results.

In a prior editorial I mentioned economic value added (EVA) as a more comprehensive measure of value and equipment effectiveness. The complete paper is posted on the MIMOSA web site: www.mimosa.org. If you think we ought to be playing a profit center game and scoring results in terms of effectiveness, please take a look and let me know what you think about EVA and producer value as a beginning. It certainly isn’t the complete answer but it’s probably part of the answer.

I’m convinced that if we want to be recognized for our contribution to enterprise profitability, we need to change our mentality from cost to profit and demand a scoring system that conclusively demonstrates real contribution in financial terms. MT


October 1, 1998

Using Ultrasound To Gauge Internal Corrosion

Factors to consider when selecting and using ultrasonic gauges to measure remaining pipe and tank wall thickness.

A particularly important problem that faces many industries is measurement of remaining wall thickness in pipes, tubes, tanks, and structural members subject to corrosion. Such corrosion is often not detectable by visual inspection, even when the area is accessible. If undetected over a period of time, corrosion will weaken walls and possibly lead to failures, some with dire safety, economic, or environmental consequences. Ultrasonic testing is a widely accepted nondestructive method for performing this inspection, permitting quick and reliable measurement of thickness without requiring access to both sides of a part.

This article focuses on a class of ultrasonic instruments often referred to as corrosion thickness gauges. These commonly handheld gauges digitally display the remaining wall thickness of the part. They usually employ a dual element transducer (or dual probe), which is normally used for corrosion survey work rather than precision gaging work. Dual element transducers are typically rugged, able to withstand high temperatures, and highly sensitive to pitting or other localized thinning conditions. As their name implies, dual element transducers use a pair of separate piezoelectric elements, one for transmitting and one for receiving, bonded to separate delay lines cut at an angle.

A pulse-echo ultrasonic thickness gauge determines the thickness of a part or structure by accurately measuring the time required for a short ultrasonic pulse generated by a transducer to travel through the thickness of the material, reflect from the back or inside surface, and be returned to the transducer. In most applications this time interval is a few microseconds or less. The measured two-way transit time is divided by two to account for the down-and-back travel path, and then multiplied by the velocity of sound in the test material.
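
The arithmetic is simple enough to sketch in a few lines of Python. This is a minimal illustration, not any gauge vendor's code; the longitudinal sound velocity for steel (about 5,920 m/s) is a typical handbook value assumed for the example.

```python
def wall_thickness(round_trip_time_s, velocity_m_per_s):
    """Pulse-echo thickness: halve the measured two-way transit time,
    then multiply by the sound velocity in the test material."""
    return (round_trip_time_s / 2.0) * velocity_m_per_s

# Example: a 3.4 microsecond round trip in steel (assumed ~5920 m/s)
# indicates roughly 10 mm of remaining wall.
print(f"{wall_thickness(3.4e-6, 5920.0) * 1000:.2f} mm")  # ~10.06 mm
```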

Standard industry practice has been to use dual element transducers for corrosion survey work, particularly when the inside surface of the test piece is pitted or rough. It is the irregular surfaces that are frequently encountered in corrosion situations that give dual element transducers an advantage over single element transducers. All ultrasonic gaging involves timing the round trip of a sound pulse in a test material. Because solid metal has an acoustic impedance that differs from that of gases, liquids, or corrosion products such as scale or rust, the sound pulse will reflect from the far surface of the remaining metal. The test instrument is programmed with the velocity of sound in the test material and computes the wall thickness.

Dual element transducers incorporate separate transmitting and receiving elements, set at an angle, so that the transmitting and receiving beam paths cross beneath the surface of the test piece. This crossed-beam design of dual element transducers provides a pseudo focusing effect that optimizes measurement of minimum wall thickness in corrosion applications. The dual element units are more sensitive than single element transducers to echoes from the base of pits that represent minimum remaining wall thickness. Also, they often may be used more effectively on rough outside surfaces. Couplant trapped in pockets on rough sound entry surfaces can produce long, ringing interface echoes that interfere with the near surface resolution of single element transducers. With a dual element unit, the receiver element is unlikely to pick up this false echo. Finally, dual element transducers may be designed for high temperature measurements that would damage single element contact transducers.

Modern corrosion thickness gauges incorporate internal data logging functions that can be used for statistical analysis of stored thickness data. Documentation capabilities may range from simple printouts of thickness readings to the transfer of data to a computer to generate powerful three-dimensional, color-coded grid files. Some instruments feature on-screen comparison of current thickness readings vs. previous readings, which is ideal for monitoring the degree of wall thinning.
The following general principles apply to all corrosion measurements with dual element transducers, whether used with a thickness gauge or a flaw detector. In all cases, the instrument must be properly calibrated for sound velocity and zero offset in accordance with the procedure found in the instrument’s operating manual.

Transducer selection
For any ultrasonic measurement system (transducer plus thickness gauge or flaw detector), there will be a minimum material thickness below which valid measurements will not be possible. Transducers at higher frequencies are capable of measuring thinner parts. In corrosion applications, where minimum remaining wall thickness is normally the parameter to be measured, it is particularly important to be aware of the specified range of the transducer being used. If a dual element transducer is used to measure a test piece that is below its designed minimum range, the gauge may detect invalid echoes and display an incorrectly high thickness reading.

In selecting a transducer for a corrosion application it is also necessary to consider the temperature of the material to be measured. Not all dual element transducers are designed for high-temperature measurements. Using a transducer on a material whose temperature is beyond the unit’s specified range can damage or destroy the transducer.

Surface condition
Loose or flaking scale, rust, corrosion, or dirt on the outside surface of a test piece will interfere with the coupling of sound energy from the transducer into the test material. Thus, any loose debris of this sort should be cleaned from the specimen with a wire brush or file before measurements are attempted. Generally it is possible to make corrosion measurements through thin layers of rust, as long as the rust is smooth and well bonded to the metal below. Some very rough cast or corroded surfaces may have to be filed or sanded smooth in order to ensure proper sound coupling.

Severe pitting on the outside surface of a pipe or tank can be a problem. On some rough surfaces, the use of a gel or grease rather than a liquid couplant will help transmit sound energy into the test piece. In extreme cases it will be necessary to file or grind the surface sufficiently flat to permit contact with the face of the transducer. In applications where deep pitting occurs on the outside of a pipe or tank it is usually necessary to measure remaining metal thickness from the base of the pits to the inside wall. There are sophisticated ultrasonic techniques utilizing focused immersion transducers that can measure directly from the base of the pit to the inside wall, but this is generally not practical for field work. The conventional technique is to measure externally unpitted metal thickness ultrasonically, measure pit depth mechanically, and subtract the pit depth from the measured wall thickness. Alternatively, one can file or grind the surface down to the base of the pits and measure normally.
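
The conventional technique reduces to a simple subtraction, sketched below with illustrative numbers (the 12.5 mm wall and 3.2 mm pit depth are assumptions for the example, not values from the article).

```python
def remaining_wall_at_pit(unpitted_wall_mm, pit_depth_mm):
    """Measure the unpitted wall ultrasonically, measure pit depth
    mechanically, and subtract to estimate the metal under the pit."""
    return unpitted_wall_mm - pit_depth_mm

print(remaining_wall_at_pit(12.5, 3.2))  # 9.3 mm of metal remains under the pit
```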

Transducer positioning, alignment
For proper sound coupling the transducer must be pressed firmly against the test surface. On small diameter cylindrical surfaces such as pipes, the transducer should be held so the sound barrier material, visible on the probe face, is aligned perpendicular to the center axis of the pipe.

An ultrasonic test measures thickness at only one point within the beam of the transducer, yet wall thickness often varies considerably in corrosion situations. Test procedures usually call for making a number of measurements within a defined area and establishing a minimum and/or average thickness. Ideally, data should be taken at increments no greater than half the diameter of the transducer to ensure that no pits or other local variations in wall thickness are missed. This is normally not practical; instead, a significant statistical sampling of data points is often taken, and it is up to the user to define a pattern of data collection appropriate to the needs of a given application.
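
To see why full coverage is rarely practical, consider how many readings the half-diameter rule implies. A minimal sketch, assuming a rectangular survey patch and a 12.7 mm (0.5 in.) transducer; both numbers are illustrative.

```python
import math

def grid_points(length_mm, width_mm, transducer_diameter_mm):
    """Readings needed to cover a rectangular area at a grid spacing
    of half the transducer diameter (the ideal sampling rule)."""
    spacing = transducer_diameter_mm / 2.0
    rows = math.ceil(width_mm / spacing) + 1
    cols = math.ceil(length_mm / spacing) + 1
    return rows * cols

# A 300 mm x 300 mm patch with a 12.7 mm transducer: ~2,401 readings,
# which is why a statistical sample is usually taken instead.
print(grid_points(300, 300, 12.7))
```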

High temperature measurements
Corrosion measurements at elevated temperatures require special consideration. The following points should be considered:

  • Check that the surface temperature of the test piece is less than the maximum specified temperature for the transducer and couplant to be used. Some dual element transducers are designed for room temperature measurements only.
  • Use a couplant rated for the temperature of the test surface. All high temperature couplants will boil off at some temperature, leaving a hard residue that will not transmit sound energy.
  • Make measurements quickly and allow the transducer body to cool between readings. High temperature dual element transducers have delay lines made of thermally tolerant material, but with continuous exposure to very high temperatures the inside of the probe will heat to a point where it eventually will destroy the transducer.
  • Both material sound velocity and transducer zero offsets will change with temperature. For maximum accuracy at high temperatures, velocity calibration should be performed using a section of the test bar of known thickness heated to the temperature where measurements are to be performed. Quality thickness gauges have a semi-automatic zero function that can be employed to adjust zero setting at high temperatures.

Gauges and flaw detectors
An ultrasonic corrosion gauge is designed to detect and measure echoes reflected from the inside wall of a test piece. It is possible that material discontinuities such as flaws, cracks, voids, or laminations may produce echoes of sufficient amplitude to trigger the gauge, showing up as unusually thin measurements at particular spots on a test piece.

Corrosion gauges that incorporate waveform displays can be very useful in detecting these conditions. However, a corrosion gauge is not designed for flaw or crack detection, and cannot be relied upon to detect material discontinuities. A proper evaluation of material discontinuities requires an ultrasonic flaw detector used by a properly trained operator. In general, any unexplained readings by a corrosion thickness gauge merit further testing with a flaw detector. MT


Information supplied by Meindert Anderson, Nondestructive Testing Division of Panametrics, 211 Crescent St., Waltham, MA 02453; (800) 225-8330

What Is Ultrasound?
Sound energy can be generated over a broad frequency spectrum. Audible sound, for example, is restricted to a low frequency range with a typical upper limit of 20,000 cycles/sec, or 20 kHz. Ultrasound is sound at frequencies above 20 kHz, too high to be detected by normal human hearing. Corrosion thickness gauges typically operate at much higher frequencies, ranging from 1 MHz to 10 MHz.

Why Ultrasonic Testing?
Ultrasound—because of its short wavelength—has the advantage that it can make very accurate thickness measurements on metals (as well as on plastics, glass, rubber, and other engineering materials). Equally important, measurements are nondestructive and allow an inspector to obtain wall thickness from one side without having to cut the test piece open. Measurements are repeatable, meaning an inspector can perform the same inspection at various time intervals and monitor the degree of wall thinning. Ultrasonic thickness gauges can play a vital role in the predictive or preventive maintenance of pipes, tanks, or other metal structures subject to corrosion, erosion, or pitting.
Through Paint, Echo-To-Echo Thickness Measurements

Recent advances in the design of ultrasonic corrosion thickness gauges utilizing dual element transducers have made it possible to take accurate metal thickness measurements with no need to remove paint or coatings. This feature is often referred to as echo-to-echo thickness measurements.

Traditional ultrasonic corrosion gauges make thickness measurements by determining pulse transit time to the first backwall echo. This technique generally works very well, except for the specialized case where the surface of the pipe or tank is covered with a layer of paint or other coating. In these cases, traditional corrosion gauges will measure the total thickness of both the coating and the metal substrate. Because paint and similar coatings normally have a sound velocity much slower than that of the metal substrate, a coating will usually add two to three times its actual thickness to the total ultrasonic reading. Therefore, inspectors often may have to remove the paint or other coating in order to get true metal thickness readings. This often proves to be very time consuming, and usually the measurement point has to be repainted as well.
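
The size of the error follows directly from the velocity mismatch: a single-echo gauge calibrated for the metal converts the coating's transit time at the metal's velocity. A minimal sketch; the velocities for steel (~5,920 m/s) and paint (~2,300 m/s) are assumed typical values, not figures from the article.

```python
def apparent_coating_error_mm(coating_mm, v_metal=5920.0, v_coating=2300.0):
    """The coating appears v_metal/v_coating times its true thickness
    when the gauge times the echo at the metal's calibrated velocity."""
    return coating_mm * (v_metal / v_coating)

# A 0.5 mm paint layer adds ~1.29 mm to the reading -- about 2.6 times
# its actual thickness, consistent with the 2-3x figure cited above.
print(f"{apparent_coating_error_mm(0.5):.2f} mm")
```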

Until recently, to avoid this measurement problem without having to remove the coating, inspectors had to rely on flaw detectors to make thickness readings utilizing the multiple backwall echoes that many metal test pieces produce. This technique works well, but requires more operator skill as well as heavier and more expensive equipment. Now inspectors can use handheld thickness gauges for these types of measurements as long as these gauges have the echo-to-echo feature.


October 1, 1998

ADC for Maintenance Management

Automatic data collection technologies are ready to enhance data entry for the information-driven maintenance organization.

 


An optical scanner reads a bar code that provides data about the equipment. Bar codes can also support work orders, parts inventory, asset tracking, and labor reporting. (Photograph courtesy Tiscor.)

Like any other mission-critical activity, maintenance management is driven by information. Computer systems automate many aspects of a maintenance operation, usually relying on keyboard input and paper output to collect and disseminate information. However, there are situations where the entry and publication of maintenance data can be automated. Given the right situation and proper implementation, automation can significantly enhance the effectiveness of a maintenance operation.

 

Automatic data collection (ADC) is the process of automating the entry and dissemination of computer-based information. It is an assortment of technologies that provide a machine-based alternative to keyboard entry. It includes bar codes, touch memory, magnetic stripe cards, radio frequency communication, and voice recognition.

Hardware and software vendors have just started to recognize the potential of ADC in maintenance management. At the beginning of the decade only a few computerized maintenance management system (CMMS) vendors provided bar coding modules. Today, most major CMMS vendors support bar coding. Some have introduced products using touch memory and pen-based computers. The number of ADC maintenance management solutions will continue to grow with advancing technology and the need to increase productivity.

Common elements in ADC
ADC maintenance management applications generally have four common elements regardless of the technology used. They are collection medium, reading and writing devices, terminals and data communication, and application software.
The collection medium is the physical vehicle for storing or transmitting information. Bar codes, touch memory buttons, radio frequency identification (RF/ID) tags, and speech are collection media.


Technician uses touch memory technology to collect and log data. (Photograph courtesy Diversified Systems Group.)

Reading and writing devices are used to retrieve and store information in the collection medium. Bar code scanners, bar code printers, magnetic stripe readers, and microphones are examples of reading and writing devices.

Terminals provide a mechanism for users to interact with the collection process and application software. Fixed terminals communicate with a computer system through cabling and wires. Batch terminals are portable and require users to physically place the terminal in a cradle or docking station in order to upload and download data from the target computer system. Radio frequency (RF) terminals also provide portability, but allow users to send and receive on a real-time basis. Terminals vary in processing power from simple storage devices to portable computers complete with keyboard and display.

The application software is generally a CMMS. However, other software such as predictive maintenance analysis and stand-alone inventory control packages can support ADC. Commercially available software packages do not inherently support ADC; vendors must design and develop special program code in their products in order to support it. Information technology departments and system integrators can custom build stand-alone ADC solutions or, in certain instances, integrate ADC technology into an existing application package.

ADC technologies
Bar codes remain the most popular ADC technology used in maintenance management. There are bar coding solutions for just about every maintenance system application that requires the entry of a predetermined set of values such as work order numbers or failure codes. However, other technologies are starting to make an appearance. They include two-dimensional bar codes, touch memory, magnetic stripe and smart cards, radio frequency and wireless communications, portable pen-based computers and personal digital assistants, and voice recognition.

Each technology has its own set of unique capabilities and a cost threshold that can make it appropriate for some applications and not others. Many applications use a combination of the technologies, while others can be addressed by only one particular technology.

ADC maintenance management applications are not restricted to the technologies listed previously. Touch screen computers and optical character recognition are integral components of many electronic document management systems. Biometrics provides the ability to secure access to facilities and financial transactions based on fingerprint or retinal scans. Infrared remains a popular wireless communication mechanism.

Bar codes. Bar coding is an accepted, if not common, practice in maintenance management. Bar codes can support work order processing, inventory control, tool tracking, asset management, and labor reporting. A bar code’s pattern of alternating dark stripes and light spaces allows key data elements such as work order numbers, part numbers, and failure codes to be encoded on a piece of paper or label. An optical scanning device reads the bar code by illuminating the pattern and translating the resulting reflection into a data stream. Traditional bar codes store a relatively small amount of information in linear patterns of bars and spaces.
Two-dimensional bar codes extend this capacity considerably. There are several two-dimensional bar code symbologies available, with PDF 417 generally recognized as the standard for maintenance applications. It allows up to 1800 characters to be encoded into a single bar code symbol.
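
However the symbol is encoded, the application software ultimately receives a decoded character string and must map it to CMMS fields. The sketch below assumes a purely hypothetical pipe-delimited label format; real systems define their own encodings.

```python
def parse_scan(data: str) -> dict:
    """Split a scanned label into CMMS fields. The layout
    WORKORDER|PART|FAILURE-CODE is hypothetical."""
    work_order, part, failure_code = data.split("|")
    return {"work_order": work_order, "part": part, "failure_code": failure_code}

# Scanners typically deliver the decoded symbol as a plain string
# (keyboard wedge or serial port), which the application then parses.
print(parse_scan("WO-004521|PN-88713|F-BRG"))
```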

Touch memory. Touch memory devices store detailed information in a format that can be directly attached to an equipment item. As the name implies, a probe must physically touch the storage device in order to transfer information to or from a data collection terminal. Touch memory buttons come in a variety of models rated according to their storage capacity, ranging from 1000 to 64,000 characters of data. There are two types of touch memory: read-only and read-write. In read-write format, a touch memory device is especially suited for logging predictive maintenance and repair activities. Its electronically accessible serial number makes it an ideal vehicle for confirming that a craftsman was actually at the job site. Its relatively low cost, ruggedness, and ease of use make it attractive for many applications.

Magnetic stripe. Magnetic stripe technology employs magnetic material typically applied to a credit-card-size piece of plastic as the data collection medium. Information is encoded by alternating the polarity of small sections of the stripe. Magnetic stripe technology is often used in maintenance for time and attendance, procurement, and security access applications. When an employee identifier is encoded on a magnetic stripe card, it can be used to control and track access to unmanned storerooms and tool dispensing machines.

Smart cards. Smart cards employ the same technologies utilized by touch memory and RF/ID to store large amounts of data. Some smart cards require physical contact for read-write operations. Others transmit or receive data in the same manner as RF/ID tags. Their potential uses in maintenance include purchasing control, security, and tool management. Their ability to retain data makes the cards attractive for procurement activities by allowing work order or accounting data to be captured as each purchase is made.

Radio frequency. Radio frequency data communication (RF/DC) is a term used by ADC vendors to describe a wireless local area network where radio-enabled, hand-held, or vehicle-mounted terminals communicate with a base station connected to a host computer system or network. RF/DC provides maintenance applications with interactive verification and control. Users can be directed to perform an action on an as-needed basis and data can be verified against a host-system database as soon as it is scanned. These capabilities make it popular for warehouse management systems and for situations where maintenance personnel at job sites require instant access to a centralized database but physical cabling is impractical.

Wireless technology. Wireless wide area network (WAN) systems employ radio and cellular packet data communications services to connect mobile users to a central system. CMMS vendors have just begun to introduce WAN-based solutions that support users at remote job sites. These solutions typically feature notebook computers and personal digital assistants equipped with wireless modems that communicate with the CMMS through the WAN service. They allow the remote user to interactively access work order requests, update work orders, view PM procedures, and check part availability in the CMMS.

Benefits Of ADC

Automatic data collection can benefit a maintenance organization by:

  • Reducing the time spent on data entry
  • Increasing the accuracy of maintenance information
  • Reducing paperwork
  • Identifying assets
  • Supplying information where it is needed
  • Providing an activity audit trail
  • Securing valuable resources

Using the technology
ADC maintenance applications will continue to grow in popularity as technology advances and the benefits become more widely known. However, maintenance organizations should carefully consider what their needs are now and for the future.
ADC technology is not a substitute for good management, competent craftspeople, proper techniques, or appropriate information systems. In order to be successful, ADC or any other information technology cannot be evaluated or implemented in a vacuum. It must be part of an organization-wide effort to achieve maintenance excellence. Before any ADC project can be considered, two key components must be in place—the strategic maintenance master plan and the CMMS needs assessment.

The strategic maintenance master plan establishes the overall maintenance goals and objectives within the organization based on a thorough assessment of current operations and practices. It defines the core elements by functional areas needed to achieve the goals and objectives and it identifies the necessary resources required for implementation. It also establishes the performance measures needed to justify the plan and manage its successful implementation.

The CMMS needs assessment identifies the information systems and resources required to support the strategic maintenance master plan and achieve maintenance excellence. It delineates the informational requirements of each functional area from work order management to cost reporting. It documents the informational flows within the maintenance department and between the maintenance department and other organizational entities. The needs assessment establishes the selection criteria used in evaluating any prospective solution and identifies the resources required for successful implementation.

The strategic maintenance master plan and CMMS needs assessment are part of an on-going process. Given today’s competitive environment and changing technology, no maintenance organization can afford to rest. The performance of the organization must constantly be measured against the benchmarks established by the master plan. The master plan must be periodically reviewed and revised.
Potential application of ADC technology should be part of the CMMS needs assessment process. Once the informational requirements and flows of the organization have been established, the suitability of ADC technology can be evaluated. Functional areas that are prime candidates for ADC technology, based on its potential benefits, can be identified and incorporated into the CMMS selection criteria.

However, the evaluation of ADC technology should not stop with the implementation of a CMMS package. Vendors constantly introduce new modules and enhancements. An ADC module that was not deemed necessary when a package was selected can become a viable solution a few years later. The need for ADC technology is not universal across all maintenance organizations. However, most organizations do need to evaluate its suitability to their operations when developing their CMMS needs assessment. Organizations that are truly interested in pursuing maintenance excellence should constantly look for the right opportunities to apply ADC technology. MT


Tom Singer is a project manager at Tompkins Associates, Inc., an engineering-based consulting firm, 2809 Millbrook Rd., Raleigh, NC 27616; telephone (919) 876-3667; Internet: www.tompkinsinc.com


October 1, 1998

Measuring The Cost of Unreliability

A practical tool that allows managers to quickly understand the value of reliability and how reliability impacts profit.

Not long ago, reliability was considered engineering alchemy, an “Alice in Wonderland” science. Today, reliability is being treated as a true engineering discipline. It is such a popular term that it has given birth to an entire industry that has produced countless titles on the subject. Several professional societies have been founded and the lecture circuit is full of reliability engineers promising to decode the science of reliability.

Reliability and its design methodology have had a long and fruitful existence. They were employed in the 1940s and 1950s to design complex systems and measure risk in exotic military projects. In the 1960s, reliability tools were refined and became a base alloy in the program that saw Neil Armstrong place the first of what was to be many footprints on the moon. The 1970s brought with it the golden age of commercial nuclear power production. During this period, reliability stood as a silent sentinel to reactor design and associated safety systems design. Over the past two decades, reliability has made and continues to make its mark as a successful design characteristic in any process, system, or component.

Somewhere during the past 20 years, perhaps when words like Chernobyl, Bhopal, and Challenger filled the headlines, the expectations of industrial and manufacturing process plants were reordered and owners began to view their investments with a highly demanding economical eye. This is not to say that economics was never the top order of the day, but the emphasis and the associated costs placed on environmental protection, process safety management, worker health, and plant availability sounded the wake-up call. This forced owners and managers to look at new ways to keep their plants profitable. It was then that the forgotten stepchild known as maintenance was given the recognition it deserved. If keeping the plant running and profitable were the kingdom, maintenance would need to be the keys to that kingdom.

Over a 30-year period, reliability-centered maintenance (RCM) would develop a strategic framework for addressing process failures using the civil airline industry as its teacher. John Moubray and his book Reliability-centered Maintenance (Industrial Press, New York, 1992) broke new ground by developing a systematic approach to understanding and preventing failure.

This book introduced the most revered of the maintenance acronyms—RCM—into the lexicon of maintenance and, almost single-handedly, produced some of the most sweeping changes in how equipment reliability was viewed within the maintenance function. RCM was shown to be a series of well researched and executed processes that promised a greater understanding of why things fail and, more importantly, how to take measures to prevent the consequences of failures.

A major problem with implementation of the RCM process is that it is often applied far too broadly to yield practical results, and the price for such a protracted endeavor is typically far more than an organization with serious equipment reliability issues can bear. (Moubray notes that “the quickest and biggest short-term returns are usually achieved when RCM is applied to assets or processes suffering from intractable problems which have serious consequences.”)

What is needed is a practical tool to allow managers to quickly understand the value of reliability and how reliability impacts profit. In 1993, H. Paul Barringer, a Houston-based reliability consultant, realized the difficulty of making the RCM process work and posed the question: “Can your plant afford a reliability improvement program?”

Barringer observed that few, if any, organizations could afford to employ the entire RCM process without first understanding how unreliability affects the bottom line.
Fortunately a practical reliability tool can be extracted from Moubray, Barringer, and the past 30 years of experience and research, and we will not need rocket scientists to use it in a cost-effective manner.

Defining reliability
Reliability is most commonly defined as the probability that equipment or a process will function without failure, when operated correctly, for a given period of time, under stated conditions. Simply put, the fewer equipment or process failures a plant sustains, the more reliable the plant.

In searching for a single-word definition, reliability is dependability. Many industries have the additional burden of ensuring that plant reliability is kept in the forefront of day-to-day operations. Employee safety, public approval, and demonstrated environmental safeguards lie at the very core of an industry’s existence.
The accident at the Three Mile Island power plant is stark testimony that reliability, when used as a design characteristic, works. If Reactor-2 had been designed without inherent stability and reliability, chances are you would be using a candle to read this article.

Thinking of reliability as an engineering problem, one can imagine a team of engineers searching for better equipment designs and working out solutions to eliminate weak points within system processes. When considering reliability from a business aspect, the focus shifts away from reliability and toward the financial issue of controlling the cost of unreliability. Quantifying reliability in this way sets the stage for the examination of operating risks when monetary values are included. Measuring the reliability of industrial processes and equipment by quantifying the cost of unreliability places reliability under the more-recognizable banner of business impact.

It is not a difficult thought process that leads us to the conclusion that higher plant reliability lies in the ability to reduce equipment failure costs. The motivation for a plant to improve reliability by addressing unreliability is clear: Reduce equipment failures, reduce costs due to unreliability, and generate more profit. It is under this preamble that a sound business commitment to plant reliability begins to step out of the shadows and take shape.

Measuring reliability
We have now defined reliability as a plant engineering characteristic, and, more importantly, defined it in terms of business impact. In order to improve reliability, we first must understand the very nature of its measurement—failure.
Moubray defines failure as “the inability of any asset to fulfil a function to a standard of performance which is acceptable to the user.” This is the definition that we will use, but we will move it up a level, to the plant as a whole.

We shall define failure as the loss or reduction in performance of a system, process, or piece of equipment that causes a loss or reduction in the ability of the plant to meet a consumer demand. This definition focuses attention on the systems vital to making the plant profitable, while the standard definition could lead some people to believe that all equipment is equal. The loss of a pawn in a game of chess does not represent the loss of the game. It is a calculated risk taken in a strategic effort to win the game and it is, after all, a pawn. In other words, the probability of meeting consumer demand has been increased as equipment within a process is evaluated based on its impact to the financial health of the company.

Mathematically, reliability is the probability that no production-interrupting failure will occur over a given future time interval and is stated as:

R = e^(-λt)

where:
R = reliability
e = 2.71828..., the base of natural logarithms
λ = failure rate, the reciprocal of mean time between failure, or 1/MTBF
t = given time interval for which the prediction is sought
For the purpose of calculating the cost of unreliability of industrial equipment, mean time between failure (MTBF) can be defined as the time interval of the study divided by the number of production-interrupting failure events recorded during the study.
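
These definitions translate directly into a few lines of Python. This is a minimal sketch; the four failures in an 8760-hour study interval are illustrative numbers, not data from the article.

```python
import math

def failure_rate(interval_hours, n_failures):
    """lambda = 1/MTBF, with MTBF taken as the study interval divided
    by the number of production-interrupting failures recorded."""
    return n_failures / interval_hours

def reliability(lam, t_hours):
    """R = e^(-lambda*t): probability of no production-interrupting
    failure over the interval t."""
    return math.exp(-lam * t_hours)

lam = failure_rate(8760, 4)  # 4 failures in one year (assumed)
print(f"lambda = {lam:.6f}/hr, R(1 yr) = {reliability(lam, 8760):.3f}")  # ~0.018
```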

The good, the bad, and the ugly
We have defined reliability (the good) as requiring the measurement of failure (the bad). There remains only one obstacle to putting the above equation to work. We must glean failure data from industries that do not understand how to accumulate coherent equipment failure data for the purpose of relating it to cost (the ugly).
Plant engineers and maintenance practitioners typically maintain that good failure data does not exist, or would require extraordinary effort to secure. This is simply not true. Failure data exists all around them in varying degrees of usefulness. Many plants have been accruing failure data under the guise of operating logs, work orders, environmental reports, etc. The force that drives the paradigm is that plant management does not see the data as a tool to solve problems and as a result rarely treats or analyzes the data in an economical manner. This is punctuated by the fact that operators, maintenance personnel, supervisors, and managers fail to acquire data in a manner conducive to analysis.

The net result is a vast bank of quite useful information, haphazardly recorded and poorly structured. When equipment or process failures cause enough of a financial concern to warrant study, engineers can look forward to hours of sifting piles of incoherent data in search of an answer.

Substantial amounts of failure data exist in various places awaiting use for improving the reliability of processes and equipment. Start with common sense data now, then couple it with a progressive data recovery program. With these elements in place, the road to an integrated and structured maintenance management program that recognizes plant reliability as its mission will no longer be elusive.
Acquiring failure data

Robert Abernethy, in his book The New Weibull Handbook (self-published, North Palm Beach, FL, 1996), maintains that acquiring equipment failure data has three basic requirements:

  1. A clear definition of failure.
  2. The definition of an unambiguous time origin.
  3. The definition of a scale to measure the passage of time.

He goes on to explain that commercial businesses require the addition of two elements:

  1. A measurement defining the cost of the failure.
  2. A method by which data can be analyzed.

In order to illustrate this concept, we need to get back to basics. It is a common philosophy (especially among investors) that the mission of the maintenance component of any facility is to keep the plant producing. In other words, protect the investment.

This translates well into the mission of reliability and gives us our newest characteristic: protect the integrity of the process. It can only follow that plant processes are maintained by protecting system function and system functions are protected by maintaining equipment.

In order to establish a beachhead for reliability improvement, we need to define failure in terms of the overall mission. For ease of illustration, we shall consider the primary loop, the secondary loop, and the power transmission stages of power generation in a nuclear power plant as the three high-level processes under which failure has the greatest financial impact.

In order to hold the study to an unambiguous time interval, we shall fix the time for each process with consideration to quality of failure data available for that time interval, then normalize the failure rate.

The time interval calculation assumes that the plant runs 24 hours per day, 365 days per year or 8760 hours per year. The number of failures was counted for the time interval to calculate the MTBF. Failure rate is calculated by taking the reciprocal of MTBF.

With the failure rates known, we can determine the production time lost from the failures and begin to determine the cost of unreliability.

In our example, we have established the three critical processes in making a power plant financially feasible. The criticality of the systems and equipment that make up these processes carries its own weight with regard to personnel and environmental safety. In understanding the financial ramifications of unreliability, it is important that the average corrective time for failures be determined for the purpose of estimating process downtime. This total average downtime equates to lost production time and, consequently, lost revenue.

In order to prove the value of this tool, the worth of its assumptions must be addressed. The most salient assumption must be that there is some net worth in examining the power generation process from the highest level. The purpose of a commercial power plant is not to answer the question: Are we smart enough to tame a nuclear fission reaction in populated areas while not managing to render a 700-square-mile area uninhabitable for 1.6 million years? The purpose is to supply electricity to the local grid for economic profit without rendering a 700-square-mile area uninhabitable for 1.6 million years, even when individual equipment fails. Again, back to our chess game. We play, even though we know that individual pieces will be lost in pursuit of winning the game. Costs due to unreliability quantify the losses expected from playing the game.

It also must be assumed that the number of failures in any given time interval will generally follow true to history. Unless some extraordinary effort is taken, the number of failures will not change. Corrective repair times will remain relatively constant for the same reason.

To make the translations to the cost of unreliability there is a question that needs to be answered. Should the costs of scheduled outages be included in the cost of unreliability?

Absolutely, for two reasons: For an investor, the plant is in failure mode, and the plant has been skewered with a double-edged sword, buried to the hilt. It is not on the local power grid making money and it is spending money rapidly to renew its assets. These facts must be accepted when placing a dollar value on a plant.
Assuming that 10 megawatts of electrical capacity translates into $5 million of potential gross profit per year, a nuclear power plant rated at 1200 electrical megawatts of output will yield a gross margin of $600 million per year, or $68,493.15 per hour. When this loss is multiplied by the lost time due to failure, the hammer of unreliability is felt hard upon the anvil of business impact. The blacksmith takes another stroke when the cost of maintenance is added to gross margin loss.

Here we have represented the primary loop as a $25,000 per hour maintenance cost burden, the secondary loop as a $15,000 per hour cost burden, and the power transmission loop as an $8,500 per hour cost burden. These maintenance costs take into account the price of working with radioactive materials, additional personnel training and equipment, and the cost of returning the plant to full power operations. When the lost time due to the failure of the process is put into financial terms, it becomes apparent the cost of unreliability represents a substantial burden on the economic feasibility of the plant.
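
Putting the article's figures together, the hourly cost of a forced outage is the lost gross margin plus the maintenance burden for the affected process. A minimal sketch in Python; the 10-hour outage duration is an assumption for illustration, not a figure from the article.

```python
def outage_cost(downtime_hours, maintenance_per_hour,
                rated_mw=1200, profit_per_10mw_per_year=5_000_000,
                hours_per_year=8760):
    """Lost gross margin plus maintenance burden for a forced outage,
    using the article's $5M/year of gross profit per 10 MW."""
    margin_per_hour = (rated_mw / 10) * profit_per_10mw_per_year / hours_per_year
    return downtime_hours * (margin_per_hour + maintenance_per_hour)

# A 10-hour primary-loop outage at a $25,000/hr maintenance burden:
# 10 x ($68,493.15 + $25,000) = about $935,000.
print(f"${outage_cost(10, 25_000):,.0f}")
```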

From this data model, two highly revealing values can be calculated—annual plant availability (the time that the plant has the opportunity to make money) and plant reliability (the probability that the plant will run without a production-interrupting failure).

Availability = Uptime / Total Time = (8760 − 78) / 8760 = 99.1 percent

R = e^(-λt)
R = e^(-(399.55 x 10^-6)(8760)) = 0.030, or approximately 3 percent
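
Both values can be checked in a few lines of Python, using the downtime (78 hours) and combined failure rate (399.55 x 10^-6 per hour) given above.

```python
import math

total_hours, downtime_hours, lam = 8760, 78, 399.55e-6

availability = (total_hours - downtime_hours) / total_hours
plant_reliability = math.exp(-lam * total_hours)
print(f"Availability = {availability:.1%}")      # 99.1%
print(f"Reliability  = {plant_reliability:.1%}") # ~3.0%
```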

These numbers speak volumes. These calculations show that while the plant is generally available to produce electricity, it has only a 3 percent probability of meeting a year-long operational commitment without incurring a forced outage or reduction in power generation. The price for this plant reliability comes to $6.8 million. This is the cost of unreliability.

It is easy to see why many power organizations publish quarterly plant availability reports to their boards of directors showing availability to be high while complaining that the price of maintenance continues to be excessive. The real truth of the matter is that owners are spending inordinate amounts of money to pay for a number that, when taken alone, means little to the bottom line.

We have presented a practical and simple tool for understanding why reliability is a vital ingredient of plant operations and maintenance. What started as an esoteric term for design engineers has become a signpost pointing the way to the high country. Knowing the cost of unreliability and where, within the context of process criticality, these costs are incurred will allow plant management to address and prioritize process failure issues, knowing the financial impact to their plant. MT


Ray Dunn is vice president of physical asset management at InfoMC, Inc., 2009 Renaissance Blvd., Suite 100, King of Prussia, PA 19406; (610) 292-8002 ext. 102; e-mail rayd@infomc.com

