Archive | October, 2005


6:00 am
October 1, 2005

Precision Alignment Implementation

Arizona Chemical (Arizona) operates 14 manufacturing locations worldwide. In the “good old days,” each plant was relatively well assured of a production basis each year. As markets matured, distribution improved and customer requirements tightened, however, Arizona had to change to remain competitive. The sites now compete with sister plants for production capacity, as well as for capital funding. Production is assigned (scheduled) based on many factors. One of these is equipment reliability.

The site referenced in this article is in Savannah, GA. It was once part of a paper mill operation. The chemicals produced there were originally a side benefit of the paper-making process, in that valuable products were extracted from the pine oils released in processing wood for paper making. Most of the mechanics had come from the paper side of the plant, as had their maintenance practices.

The paper industry is notable for its early adoption of precision alignment techniques. Starting with dial indicators, this segment was among the first to employ the rim-and-face technique. Later on, the reverse-rim method gained acceptance. When laser alignment systems were developed almost 20 years ago, the paper industry was among the first to embrace the new technology.

Regardless of how good they are, though, technology and tools alone are not enough. There is a need for a cultural change. People have to believe that there is a better way. They have to believe that “doing things differently” actually will make a difference in their plant’s operation, its competitiveness and their own livelihoods. This cultural change is what made the difference at the Savannah plant.

Getting an edge
Project Advantage is a corporate-wide initiative that International Paper (IP) has made available to its operating units. The key to its success is local “buy in.” Without the commitment of a plant’s management, the program is not started. But, it also is not a top-down type of mandate. Rather it offers a “better way” that plant management can embrace. In return for their commitment and support, IP provides training and guidance to make the initiative work. Further, Project Advantage doesn’t just cover maintenance. Instead, it touches all aspects of a unit’s operation (e.g., production, engineering, shipping and receiving, customer support, etc.).

The management of Arizona’s Savannah plant decided in 1999 to adopt key elements of Project Advantage to improve their operations. Besides the maintenance function, efforts also were undertaken in production and administrative areas.

Several years later, the plant’s maintenance superintendent began tracking key failures, pump repairs and other recurring maintenance work. Uptime was good, but significant resources still were being dedicated to repairs and equipment rebuilds. Almost all pumping capacity was backed up with spares, so the impact of failures could be minimized. In February of 2003, an introductory meeting was held at the plant. This meeting was conducted by IP corporate personnel who were champions of Project Advantage.

Based on the initial presentation, plant management elected to participate in the program. This meant committing resources to improving several key areas of the plant. Project Advantage encompassed about 30 different operational areas. The four on which the plant chose to focus were precision maintenance, operator-driven reliability (ODR), work systems and root cause failure analysis (RCFA). In the maintenance arena, this meant adopting a precision process. Methods and tools would have to change in order that maintenance could be performed in a precision fashion.

The chemical plant cooperated with the paper mill on precision training as a way of leveraging scarce resources. At the time, there were approximately 35 mechanics, instrumentation technicians, pipe fitters, electricians and millwrights. All of these people were trained. They learned that installing precision would prevent rework and enhance the reliability of their operations. Management approached this as a solid investment that would yield returns, and backed it up with budgets for tools and training. A recap of the type of training provided is described below.

Precision maintenance/alignment
Keeping in mind that precision alignment is a subset of precision maintenance, several related topics were addressed in the training. It was natural at a chemical plant for mechanical seal and pump training to be provided. This training improved the knowledge and skill level regarding seal installation, maintenance and performance during operation, which, in turn, increased availability and reliability.

Among the leading causes of machinery failures are installation/assembly errors. Within the scope of Project Advantage, significant time was devoted to teaching how to avoid those types of errors. Personnel were taught how to determine the proper shaft fit for common applications (e.g., bearings, sheaves, impellers, etc.). They also learned how to reduce and prevent fit errors with precision measurement tools, such as depth, inside and outside micrometers, telescoping gauges, dial indicators, radius gauges, torque wrenches and digital “Vernier” style calipers.

The basics
Precision alignment was approached from the basics. First of all, a clear understanding of the objective of precision alignment was taught and demonstrated—to measure and position two or more machines such that their rotational centerlines are within tolerance when the machines are at operating temperatures and conditions. It was found that there had been several different definitions “floating around” the plant. Thus, by obtaining buy-in to one common definition, it was easier to work to a common goal—something that seemed quite obvious, but was not always achievable!

Part of the basics training included learning how to graphically plot alignment conditions and results. Before the acquisition of any (laser) alignment systems, however, dial indicator methods were taught. This reinforced the fundamentals of alignment and ensured that the plant did not rely on the availability of laser technology to secure the benefits of precision alignment. Some fundamental concepts clearly had to be learned and understood before an effective precision alignment program could be implemented. The plant determined that those responsible for the alignment of machinery would, at a minimum, need to understand:

  • Basic math functions (addition, subtraction, adding & subtracting positive/negative numbers, multiplication and division)
  • How a dial indicator works
  • Rotational centerlines
  • Pre-alignment checks
  • Offset
  • Offset misalignment
  • Angularity
  • Power planes
  • Correction planes
  • Horizontal
  • Vertical

A process was taught, starting with a prescribed set of pre-alignment steps and stages, to better secure a precise alignment. Participants were taught to understand, and to be able to prove, the relative shaft centerline-to-centerline position. The focus on coupling condition was de-emphasized.

A key to the precision alignment process is addressing the critical pre-alignment checks, such as runout, correcting pipe strain, soft foot, rough alignment and establishing a torquing sequence. Skipping any of these steps can lead to a frustrating and unsuccessful alignment. By emphasizing these preparatory steps, and demonstrating their importance, mechanics would learn to take the time to prepare before aligning. This means that at times it takes longer to perform an alignment, but overall many more alignments are accomplished successfully.
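As a simple illustration of one of these pre-alignment checks, a soft foot screening can be reduced to a comparison against a limit. The sketch below is hypothetical; the 2.0 mil limit is a commonly cited rule of thumb, not a figure from the plant's procedure:

```python
# Hypothetical soft foot screening sketch. The 2.0 mil limit is a common
# rule of thumb, not a value taken from the article.
SOFT_FOOT_LIMIT_MILS = 2.0

def check_soft_foot(foot_lift_mils):
    """foot_lift_mils: dial-indicator rise (in mils) measured at each foot
    as its hold-down bolt is loosened, one foot at a time.
    Returns only the feet whose lift exceeds the limit."""
    return {foot: lift
            for foot, lift in foot_lift_mils.items()
            if lift > SOFT_FOOT_LIMIT_MILS}

readings = {"inboard-left": 0.5, "inboard-right": 3.0,
            "outboard-left": 1.0, "outboard-right": 0.5}
print(check_soft_foot(readings))  # → {'inboard-right': 3.0}
```

Any foot flagged this way would be shimmed and re-checked before the alignment proper begins.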

Dial indicator methods
The rim-face and reverse-rim dial indicator methods were practiced. Technicians were taught to choose the right method for the job, as well as how to check for bar sag—and how to correct it. Gaining an understanding of what to do when machinery becomes base-bound or bolt-bound was especially useful. The graphical solution method has proven very useful for solving base-bound situations. By learning the “old way,” alignment fundamentals were reinforced and this paved the way for successful adoption of laser alignment tools.
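For illustration, the arithmetic behind sag-corrected reverse-rim readings can be sketched as below. This is a simplified example with assumed conventions (indicator zeroed at top, readings in mils); actual sign conventions vary by fixture and training program:

```python
def vertical_offset(bottom_reading_mils, sag_reading_mils):
    """Half of the sag-corrected total indicator reading (TIR) gives the
    vertical centerline offset at the indicator plane.
    Assumes the indicator is zeroed at top-dead-center.
    sag_reading_mils: the bottom reading obtained during the bar-sag
    check on a rigid pipe (typically negative)."""
    corrected_tir = bottom_reading_mils - sag_reading_mils
    return corrected_tir / 2.0

def angularity(offset_near_mils, offset_far_mils, span_inches):
    """Slope of one shaft centerline relative to the other, in mils/inch,
    from the offsets at the two reverse-rim measurement planes."""
    return (offset_far_mils - offset_near_mils) / span_inches

# Example: bottom readings of -10 and +6 mils at the two planes,
# a measured bar sag of -2 mils, and planes 8 in. apart.
o_near = vertical_offset(-10.0, -2.0)   # -4.0 mils
o_far = vertical_offset(6.0, -2.0)      # +4.0 mils
print(angularity(o_near, o_far, 8.0))   # 1.0 mil/inch
```

The same offsets, plotted against the distances to the machine feet, give the graphical solution mentioned above for base-bound and bolt-bound situations.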

Alignment tolerances
Alignment tolerances were also explored during the Project Advantage training initiative. When the program began, there was some confusion about how to interpret an alignment tolerance chart and then how to properly apply these tolerances. An effective aligner must know how to use alignment tolerances. Simply relying on an instrument’s “idiot light” to tell one when machinery is aligned is not a substitute for understanding the application of alignment tolerances. For the novice, it can lead to costly errors. The aligners needed to understand why tolerances are important. They were also taught how to take a set of tolerances from an OEM and convert them into useable parameters for the particular alignment instrumentation being used.
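As an illustration of converting a tolerance chart into useable parameters, a speed-based lookup might be sketched as follows. The numbers are representative of published short-coupled charts, not values from the article or any specific OEM:

```python
# Illustrative tolerance chart keyed by maximum running speed (RPM).
# These figures are representative of published charts for short
# couplings, not taken from the article or any OEM.
TOLERANCES = [
    (1200, {"offset_mils": 4.0, "angularity_mils_per_in": 0.7}),
    (1800, {"offset_mils": 3.0, "angularity_mils_per_in": 0.5}),
    (3600, {"offset_mils": 1.5, "angularity_mils_per_in": 0.3}),
]

def acceptable_limits(rpm):
    """Return the limits for the lowest chart speed at or above rpm."""
    for max_rpm, limits in TOLERANCES:
        if rpm <= max_rpm:
            return limits
    raise ValueError("speed above chart range; consult the OEM")

print(acceptable_limits(1750))  # a 1750 RPM motor uses the 1800 RPM row
```

The point is that the aligner enters the job knowing the numeric limits for that machine and speed, rather than trusting an instrument's "idiot light."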

Dynamic movement
Realizing that all machinery moves as it goes from a state of rest to its operating temperature and conditions, time was spent discussing dynamic movements. Although most of the machinery that is routinely aligned only moves slightly, there are many machines that require offsets to account for dynamic movements. Unfortunately this concept had previously been neglected as part of the alignment process. The usual excuses included:

  • Because it doesn’t matter…
  • We always leave the motor 5 mils low…
  • We can calculate the growth…
  • We don’t have targets from the OEM…
  • It is too difficult and expensive to measure…

Dynamic movement does indeed matter. It can cause machinery to significantly deviate from an aligned condition as it goes from off-line to running. One can make an effort at calculating the thermal growth, but this is only part of the total dynamic movement. Keep in mind that there are reaction forces (such as those from dowels, piping, etc.) that cannot be accounted for with thermal growth calculations. In addition, the horizontal movement (and machines don’t grow symmetrically!) cannot be calculated.
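The textbook thermal-growth estimate mentioned above can be sketched as follows. Note that it captures only thermal expansion, which is exactly why it is only part of the story; the coefficient is a common handbook value for carbon steel, not a figure from the article:

```python
# Common handbook coefficient for carbon steel, in in/(in*F).
STEEL_COEFF = 0.0000063

def thermal_growth_mils(centerline_height_in, delta_t_f, coeff=STEEL_COEFF):
    """Estimated vertical growth at one support, in mils.
    Accounts only for thermal expansion -- not piping or dowel reaction
    forces, and not horizontal movement, per the caveats above."""
    return centerline_height_in * coeff * delta_t_f * 1000.0

# Example: 20 in. shaft centerline height, casing warming 150 F.
print(round(thermal_growth_mils(20.0, 150.0), 1))  # 18.9 mils
```

Measuring the true off-line-to-running movement, as the plant did with OL2R fixturing, captures everything this simple estimate leaves out.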

OEM-provided targets (if available) should be taken with a grain of salt. It has been found that they almost always provide for equal growth at the front and rear of machinery. Moreover, they almost never provide any guidance about horizontal movement. The only answer is to measure the true dynamic movement of the specific critical machinery. To meet this goal, the plant acquired special fixturing (OL2R Fixtures) and a laser system for measuring the dynamic movement on the specific machinery.

Finally, the value of documentation was emphasized to the participants. Documentation plays an important part in improving reliability. Forms have been created for the physical inspection, as well as for the installation and alignment process. Now, at the Savannah plant, all maintenance procedures are recorded on equipment-specific sheets and kept with the respective equipment’s file. This allows a mechanic to evaluate the history of a piece of equipment while preparing to perform a replacement or alignment. Documentation has been a key part of the process, as it also helps in communicating “wins” to other plant personnel and serves to maintain focus and momentum.

Selecting a laser alignment tool
From the precision maintenance training, the mechanics at this Arizona plant realized that most of the precision alignments could be accomplished with dial indicators. But they also knew that a properly selected laser system offered too many compelling advantages not to be the standard for all precision alignments. The mechanics did not want an overly complicated (and feature-laden) laser alignment system, although they did tend to be somewhat gadget-oriented, to the point of always wanting more power, more options.

Next, the millwrights were involved in the evaluation and selection of a laser alignment tool. They knew that while many features from the various vendors were nice, when it came time to do a precision alignment, they probably only needed a small fraction of those features. They did not want to become bogged down and confused by all the bells and whistles that had seemed so necessary when they first looked at the laser systems. Such features can waste time and effectiveness if people operate the system with a trial-and-error approach, don’t ever become proficient or abandon the tool altogether. This ends up costing time and money with each and every alignment they perform. After the millwrights had their say as to which system they wanted, an easy-to-use laser alignment tool was selected.

A culture change
At Arizona Chemical in Savannah, there has been a marked change in people’s attitudes about precision alignment. They now see it as part of an overall effort that is improving the reliability of the site’s equipment. With management backing, there has been a consistent effort to move forward. There has been no back sliding; people have stopped taking short cuts with machinery alignments. Precision is now standard—and expected. Personal job satisfaction runs high at this plant, with the millwrights feeling as though they’re working in a professional manner. The following list outlines some of the benefits to date:

  • Prior to precision maintenance, the mode of operation was to simply replace parts; many pieces of equipment were spared, with quick change-outs allowing for continuous processes to remain operational. Now, however, failures have been reduced and the spared equipment is scheduled for regular operation, rather than being held for emergencies.
  • The plant initially started out having root cause meetings every week. Because of improvements, though, these meetings were moved to a monthly schedule.
  • Training on structured problem-solving was provided to some of the mechanics. This allowed them to take charge in resolving most equipment issues.
  • Additional training was planned, including precision alignment follow-up training. Even welders have gone through the precision maintenance class, gaining better appreciation of what millwrights do/need.
  • Plans were put in place for a pump shop designed specifically for rebuilds, providing a clean environment with all the necessary tools and fixturing for proper pump overhauls.

The mechanics at this plant have identified and outlined key factors they believe can make a precision alignment program more effective. There are many resources available (especially online) that anyone beginning a precision alignment program can tap into to explore how to leverage these key areas:

  • Securing management commitment
  • Gaining essential stakeholder “buy-in”
  • Developing training strategies (it needs to be thorough and on-going)
  • Obtaining agreement for a precision alignment objective
  • Teaching the basics of alignment—over and over again!
  • Always working to some agreed-upon alignment tolerances
  • Using documentation to build a complete machine history and to communicate “wins”
  • Taking dynamic movement into account—it will make a difference!
  • Fostering a culture change to one of “precision”
  • Carefully selecting a laser alignment system—get what you need (forget about what you “want”)

EDITOR’S NOTE: This article is based on a presentation in Norfolk, VA, at the 12th Annual SMRP Conference, in October 2004.

Mark Garza is a reliability engineer at Arizona Chemical. His responsibilities include managing the predictive and preventive maintenance of plant equipment, managing a mechanical integrity program and providing technical support for the maintenance department.

Ron Sullivan has served as president of VibrAlign, Inc. since 1996. He joined this organization in 1989 as field service manager, responsible for supporting industrial customers with predictive maintenance consulting services, including vibration analysis, training, field balancing and laser alignment services; telephone: (800) 379-2250 ext. 103.




Does RCM Have To Be a Painful Experience?

RCM is widely regarded as the most comprehensive methodology used to understand how an asset can fail, and in turn, to determine what you have to do to mitigate the consequence of failure before it occurs. In its most effective and most widely accepted form, RCMII, this methodology consists of seven structured, rigorously applied questions that have to be answered in order to build an asset’s maintenance program. Yet, despite its proven track record, many view full-blown RCM as a painful process that is resource-intensive, expensive and difficult to implement. This does not have to be the case. By adopting some common-sense strategies before launching into RCM, the experience will be a positive one that fosters ownership and teamwork among those involved, while creating a comprehensive maintenance program.

The most common RCM “pain point” is that it takes years to implement across an entire plant. Certainly, if the consequence of failure is high (for example, if you are maintaining a nuclear submarine fleet and failure means death), I would advocate the need to apply RCM on the majority of assets. But, in most industrial settings, RCM would more likely be justified on only a small percentage of your assets (typically 15-25 percent). It simply is not practical to apply RCM to each and every asset.

Although RCM may be viewed as a daunting task, the solution is to determine which assets are actually the most costly in terms of consequences and risk to the business and to target them first. You can then migrate down the prioritized list until ultimately all critical assets have been addressed. This approach allows you to balance the resource-intensiveness of RCM with other less comprehensive work identification methodologies, such as Maintenance Task Analysis, that can be quickly applied across the plant.

I recommend breaking the RCM analysis process down into two-week-long initiatives. In other words, select a system or subsystem that a team can work through, dedicating half days over a period of two weeks. They should be able to complete the analysis within this time frame, stay fresh throughout the process and have part of each day to tend to their other responsibilities. At the end of this two-week initiative, you will have defined a comprehensive program that can be reviewed with management and then successfully implemented.

Immediately following the first two-week analysis, you should begin implementing its results. As this is taking place, you can simultaneously embark on selecting and implementing your second two-week RCM initiative.

Always remember that people are at the heart of any successful RCM initiative. RCM methodology merely provides a framework for asset analysis—it can’t determine the proper function of a key asset in your plant. People bring that knowledge to the process and are your greatest resource and key to success in defining the optimum maintenance program.

It is critical that teams include a cross-section of people from Maintenance, Operations and, often, Engineering to carry out the RCM process. It’s also very important to assess whether you want to use the same personnel for each analysis or bring new people into the process each time. I recommend a rotational approach to ensure as much as possible that everyone participates and feels ownership for the new asset maintenance programs.

While equipment knowledge is vital, RCM participants also must be properly trained and understand the basic methodology. Orientation courses offered at various levels as a part of getting started are essential if participants are going to understand RCM and talk the talk.

Another common criticism of RCM is that it results in lots of hardcopy data that will collect dust on a shelf and never really be utilized. Before launching into RCM, investigate and take advantage of the latest reliability software to help maximize the friendliness and utilization of the output from your analyses. For newly-identified proactive tasks, condition data may be collected from a wide variety of sources ranging from predictive tools and handheld data collectors to manual check sheets. With the large amounts of data generated, you’ll only want to focus on the non-normal values collected. If RCM is made easy to implement and people see success quickly, they will gain a lasting appreciation for it and readily embrace it as a way of life.
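As a minimal sketch of focusing on only the non-normal values, an exception filter over collected condition data might look like the following (the point names and limits are hypothetical):

```python
def exceptions(readings, limits):
    """Return only the out-of-limit (non-normal) condition readings.
    readings: point name -> measured value
    limits:   point name -> (low, high) acceptable band
    Point names and limit values here are hypothetical."""
    return {point: value
            for point, value in readings.items()
            if not (limits[point][0] <= value <= limits[point][1])}

limits = {"pump-101 vib in/s": (0.0, 0.3),
          "pump-101 brg temp F": (0.0, 180.0)}
readings = {"pump-101 vib in/s": 0.45,
            "pump-101 brg temp F": 150.0}
print(exceptions(readings, limits))  # only the vibration point is flagged
```

Reviewing only the exceptions, rather than every reading, is what keeps the output of an RCM analysis from becoming shelfware.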

RCM doesn’t have to be a painful procedure. That’s only the case when you try to do too much at once. By breaking it down into “bite-size” pieces and ensuring the proper process, tools and training, it can be a rewarding experience that delivers significant benefits to the organization. Enjoy the experience!

Al Weber of Ivara Corp. is an internationally respected authority on reliability and a certified RCM2 Practitioner. He has more than 30 years of experience in the field, including 27 as a key participant in Eastman Kodak’s maintenance and reliability practices. Weber was a founding member of the Society of Maintenance and Reliability Professionals (SMRP).




Reliability Analysis Software

An update on information systems for reliability techniques, including software that supports management strategies from RCM (Reliability Centered Maintenance) to FMEA (Failure Modes and Effects Analysis)

Once an organization has basic maintenance strategies in place, such as preventive maintenance, inventory and purchasing practices, work processes and computerization of the maintenance business, it begins to consider how to further improve maintenance processes. One commonly-used strategy is to increase equipment reliability. Such organizations will begin to focus on equipment or assets that, if they fail, will have significant negative impact on:

  • Asset and employee safety
  • Environmental safety & compliance
  • Regulatory compliance (FDA, EPA, OSHA, etc.)
  • Plant throughput
  • Plant efficiency

Reliability-centered maintenance (RCM) is a systematic approach to developing a focused, effective and cost-efficient preventive and predictive maintenance program. The RCM technique is best initiated early in the equipment design process and should evolve as the equipment design, development, construction, commissioning and operating activities progress.

This technique, however, also can be used to evaluate preventive and predictive maintenance programs for existing equipment systems with the objective of continuously improving these processes. The goals for an RCM program are:

  • Achieve maximum reliability, performance and safety of the equipment.
  • Restore equipment to required levels of performance when deterioration occurs (but before failure).
  • Collect the data (during the life of the equipment) to change design of the equipment to improve its reliability.
  • Accomplish the above while minimizing life-cycle costs.

RCM methodology was developed in the 1960s, primarily through the efforts of the commercial airline industry. The essence of this technique is a series of structured decision trees that lead the analyst through a tailored logic in order to outline the most applicable preventive and predictive maintenance tasks. There are two main applications for RCM: equipment in the design phase and equipment already installed and in operation. For the purposes of this directory, only RCM, RCA and FMEA on existing equipment will be considered.

RCM, RCA and FMEA for existing equipment
As mentioned previously, conducting an RCM analysis for existing equipment centers around an RCM decision tree. While decision trees can be very complex, most organizations will begin by utilizing a simple approach, increasing the complexity as the analysts become more proficient.

Using basic decision trees to start will allow analysts to gain insight into the RCM decision process if a failure occurs. Based on the previous discussion of RCM for design, there are two types of information that may be considered at this point. The first relates to theoretical failures. These are failures that have not yet occurred but, through a study of the design of the equipment, are potential candidates.

The second type of information (typically used with existing equipment) uses historical data about the equipment in question or similar equipment. This information indicates what failures have occurred in the past, as well as their frequency.

Three key questions
The first question to ask is, “Will safety, environmental or other regulatory issues be compromised?” If the answer is “yes,” then appropriate preventive or predictive maintenance tasks are developed.

Preventive maintenance tasks are developed for situations in which failures can be prevented with proper lubrication, inspection and adjustments.

Predictive maintenance tasks are developed for situations in which failures cannot be prevented and, therefore, must be detected before they occur.

If the answer to the first question is “no,” then the decision tree leads to the second question.

The second question is, “When the failure occurs, is there a loss of production or availability of the equipment that impacts the operation?” If the answer is “yes,” then the appropriate preventive or predictive maintenance tasks should be developed as outlined previously.

If the answer to the second question is “no,” then the decision tree leads to the third question.

The third question is, “Is the repair expensive, i.e., is there collateral damage?” This question is not concerned solely with the component being examined; it also asks whether auxiliary equipment will be impacted. Consider a drive train: if a bearing fails, there may be more of a problem than bearing damage. The drive shaft could be scored or otherwise damaged, rendering it unsuitable for future use. Similarly, if a motor or generator is damaged, that could overload the electrical circuit, causing damage to the control system. Or, there might be stoppages of other equipment due to shared electrical distribution. In considering a failure, it is important to take into account all related equipment.

If the answer to the third question is “yes,” then the appropriate preventive or predictive maintenance tasks are determined.

If the answer to all three questions is “no,” then running the component to failure is an acceptable option. Run-to-failure is acceptable in such cases because the decision tree analysis reveals, per the following criteria, that there will be little or no impact caused by the failure:

  • Regulatory or safety issues are not compromised.
  • Expensive loss of capacity is not incurred.
  • Life-cycle cost is not inflated.
  • Probability of failure is low.
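The three-question logic above can be sketched as a simple function. This is an illustration of the decision flow only, not any vendor's implementation:

```python
def rcm_task_decision(answers):
    """Simplified sketch of the three-question RCM decision tree.
    answers: dict mapping each question to a yes/no (True/False) answer."""
    questions = [
        "safety_or_regulatory_compromised",     # question 1
        "production_or_availability_impacted",  # question 2
        "repair_expensive_or_collateral",       # question 3
    ]
    for q in questions:
        if answers[q]:
            return f"develop PM/PdM task (triggered by: {q})"
    # All three answers were "no" -- run-to-failure is acceptable.
    return "run to failure acceptable"

print(rcm_task_decision({
    "safety_or_regulatory_compromised": False,
    "production_or_availability_impacted": False,
    "repair_expensive_or_collateral": False,
}))  # → run to failure acceptable
```

Real decision trees add branches (e.g., choosing between preventive and predictive tasks), but the yes/no cascade is the core of the method.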

Root cause failure analysis
The key to making RCM analysis effective is the ability to perform a root cause failure analysis (RCFA). As previously described, the RCFA must be performed at two levels. The first is the theoretical level, which involves asking “what if” questions. The second is the historical level, which examines equipment histories for actual failures. In other words, root cause failure analysis examines theoretical or actual failures to find their root causes so they can be eliminated. Without RCFA, improvements in equipment reliability through the elimination of failures (either in the design or operating phase) could not take place.

Up until this point, this article has focused on RCM software. It is important to note, however, that RCA and FMEA software is typically used during an RCM analysis—when the true root cause of failure must be identified. FMEA software also is used to determine the specific mode and effect of the failure.
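Where FMEA software ranks failure modes, it commonly does so with the classic risk priority number (RPN). The sketch below uses the standard 1-10 rating scales of general FMEA practice, not specifics drawn from this article:

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA risk priority number.
    Each factor is rated 1-10; the product ranks failure modes so the
    highest-RPN modes are addressed first. The 1-10 scales and the
    product are standard FMEA practice, not article specifics."""
    for factor in (severity, occurrence, detection):
        if not 1 <= factor <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return severity * occurrence * detection

# Example: severe consequence (8), occasional occurrence (4),
# moderate chance of detection (6).
print(risk_priority_number(8, 4, 6))  # → 192
```

Modes are then sorted by RPN, highest first, to prioritize mitigation effort.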

Reliability software
The information about the software listed on the last two pages of this article was provided by the suppliers of the products and checked against their websites.

Each description begins with a notation of whether the software developer intends the package to facilitate RCM, RCA, FMEA or all of these processes. (It should be noted that many of the CMMS/EAM software packages available today are already interfaced to these software packages.)

As evidence of the evolution in this technology, we note Meridium and SAP addressing the fundamental challenge of integrating RCM software through RCMO™—a new RCM solution tightly integrated with SAP (and built with the latest SAP technology). RCMO simplifies the process of implementing RCM with SAP, and it allows the analyst to measure performance and make adjustments over time.

If you are currently utilizing other software not mentioned here in your RCM, RCA or FMEA analyses, please contact us by e-mail so we can include it in the next edition of this directory.




Lessons From Nature

Our recent natural disasters seem to have knocked us for a loop. At some point, critical oil and gas production and supply lines were shut down, affecting all consumers. Shipping and other commercial activities along the Gulf Coast were greatly reduced. It will be some time before a sense of normalcy returns.

Exacerbating the situation were several significant failures of our systems and processes. The failure of the levees around New Orleans allowed flooding that led to tremendous loss of life and property. Apparent lack of planning and implementation by just about everyone led to great suffering—and further loss of life and property. Grave miscalculations by risk management leaders led to confusion and chaos, fueling the various worst-case scenarios that we viewed on our television screens.

So what does all this have to do with Professional Development?
I think it is an easy leap to use these recent natural disaster situations as an analogy to our own industrial enterprises and what can occur if we are not adequately prepared. We have a chance to step back and consider the “what if’s” and determine how prepared we are for things facing us, whether they be natural disasters, old equipment, new processes or whatever. The problems that leaped from our national headlines concerning the aftermaths of Katrina and Rita should be considered a clarion call for us to consider the problems we all face in our industrial working lives. I believe we should be asking our organizations and ourselves some very probing questions.

Are our facilities and equipment systems designed and built for reliable operations? Were they designed to operate reliably in the type of situations they are being exposed to? Are they being maintained to a level that allows them to operate reliably? Have we conducted a risk analysis on our critical processes and equipment? Do we have plans in place to eliminate or mitigate failures? Are our people resources trained and ready to deal with situations? Do we have resources who understand reliability and maintenance concepts and can apply them to our particular situation?

In my last Professional Development Quarterly article, I wrote about how professional development drives our economic engines. Continuing in that framework, I think it is clear that our country’s economic engine took a significant hit as a result of the recent hurricanes.

Obviously, we can’t prevent natural occurrences like the devastating storms of a few weeks ago, but we can mitigate resulting damage somewhat through the use of a large number of tools available to us, including reliability design, risk management planning, etc. With these tools, we can prevent the more common “disasters” caused by poor planning, preparation or implementation.

We also should take this thought down to our own situations within the enterprises where we work. Have we utilized reliability design concepts? Have we developed and insisted on reliability specs for our equipment and processes? Have we developed a risk management scenario (at least for our critical processes)? Have we developed a maintenance system that utilizes modern concepts? Do we plan and schedule appropriately?

Perhaps the biggest question is do we have the human resources with the appropriate knowledge and skills to lead, develop, implement and sustain the type of systems and equipment to help ensure the smooth, reliable operation of our enterprise? If we have, then it is likely that our enterprise has a strong, well-defined professional development process for our people. If not, it is likely that our enterprise needs a much-improved professional development process.

There are numerous ways for each of us to continually work on our professional development, both individually and corporately. Conferences, short courses, university degree programs, specialized training programs and other resources abound. This magazine routinely identifies and catalogs many of these educational opportunities. Several of them even advertise in this publication.

I hope we all will take heed of what we have learned from the recent hurricane situations. Going forward, let’s make sure that each one of us is involved in some type of professional development program—honing our skills or learning new techniques to protect our enterprises, our communities, our families and ourselves—and aiding our economic engine.

Tom Byerley is Director of the Maintenance and Reliability Center at The University of Tennessee, an industry-sponsored center that promotes utilization of advanced maintenance and reliability technologies and management principles in industry. He also is currently Treasurer of The Society for Maintenance and Reliability Professionals (SMRP).

6:00 am
October 1, 2005

Determining Client Needs: An Interview With SKF's John Yolton

Using the right “tools” to evaluate asset management improvement potential.

We recently had an opportunity to discuss benchmarking and asset management improvement techniques with an acknowledged expert in the field, John Yolton of SKF.

MT: Everywhere we go, maintenance professionals are talking about “benchmarking.” Why is it that so many companies seem to be obsessed with these numbers today?

Yolton: As anyone who has responsibility for asset management can tell you, comparisons abound within and across industries. It is human nature to compare ourselves with others; therefore, we are always going to want to be placed somewhere on a scale. In many cases that means a “world-class” scale.

Achieving world-class or best-in-class performance is the real goal. The gap, or rather the closure of the gap, between a client site and world-class performance is the key to successful use of the benchmarking tool.

World-class indicators will always be moving targets, as they should, but they should also be a goal for which to strive, and around which to build a vision and a justification for a client site’s improvement effort.

MT: How do you address these issues at SKF?

Yolton: SKF Reliability Systems has developed a model of the Asset Efficiency Optimization (AEO) philosophy. The AEO process starts with development of maintenance strategies for equipment and processes at a client’s site, based upon the business goals of the site. SRCM (Streamlined Reliability Centered Maintenance) is one tool used for this development. Once these strategies have been created, whether the result is a PM task or run-to-failure, the next part of the improvement process is to identify which work makes sense to perform in order to meet the site’s business goals. Condition monitoring is a tool used for identification of necessary work.

Controlling the identified work is the next logical step in this improvement process. Generally, this is enabled by the deployment of a computerized maintenance management system (CMMS) and includes alignment with the client’s spare parts inventory.

Execution of the identified work is last. Contributing elements to this part of the model include skill levels of your personnel or outside contractors, expectations for the degree of quality of tasks performed by the client’s personnel or contractors and measurements of work quality, among others.

As with any process, occasional unexpected issues will arise following the completion of the tasks at hand, warranting adjustments to the overall program. This feedback, whether in the form of an adjustment to PM frequencies or to actual inspection tasks, for example, is referred to as the “living program.”

MT: It seems as though there is quite a lot to this improvement effort. In fact, to many companies, it may feel rather overwhelming. How would a company get started?

Yolton: Any improvement process begins with identification of the client site’s current state or present situation to help determine the gap between the existing situation and the future state or goal, which is depicted in the maintenance maturity diagram in Fig. 2.

In this diagram, the four stages of maintenance maturity are shown as Firefighting, Maintaining, Promoting and Innovating, each with its own individual characteristics of drivers, behaviors and reward systems.

As an example, it is not at all unusual to come across an organization that has developed excellent responsiveness to breakdowns, thereby minimizing the downtime associated with a failure. This type of organization typically flourishes with “heroes” who are recognized by “attaboy” pats on the back and rewarded with extensive overtime opportunities.

At the other extreme is the innovative organization that has grown far beyond the mentality of merely fixing failures quickly. It has become proactive in eliminating root causes of potential failures, sometimes as early as the design phase, and it certainly uses redesign as an option for failure elimination. This type of continuous improvement includes a very active, structured and ongoing learning process.

MT: I can see how that might help a company understand where it is in a relative sense. However, does the process get more specific? It doesn’t seem that this provides enough detail to start the improvement process.

Yolton: You are correct. When a client is ready for its specific improvement process, there is a tool for determining the site’s particular needs for improvement. It’s the Client Needs Analysis (CNA) and it’s based on the SKF Asset Efficiency Optimization (AEO) model explained earlier.

For each of the concept’s four facets, i.e., Strategy, Identification, Control and Execution, 10 carefully crafted questions are posed. The client’s responses to these questions are then compared to world-class best-practice benchmarks that have been publicly presented and/or published by a variety of recognized organizations.

The answers to the 40 questions are quantified, based upon a scale derived specifically for each question from the world-class benchmarks noted above.

This “scoring” is provided in order to properly position the site’s current state within the four stages of maintenance maturity shown in Fig. 2.

The tool then provides a maturity matrix of the responses provided (Fig. 3). This matrix is invaluable in positioning the site’s focus for improvement efforts in that it helps personnel understand where their maintenance effort is in relationship to world-class asset optimization.

The maturity matrix aligns the scores with the four facets of the AEO concept and the maintenance maturity of the client’s organization, thus allowing analysis for developing an action plan for improvement. Further analysis is possible by comparing the organization’s response to those of its peers within their own industry or across others.
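The roll-up Yolton describes (40 question scores grouped into the four AEO facets and mapped onto a maturity stage) can be sketched in a few lines of Python. The CNA itself is proprietary; the facet grouping and question count come from the interview, but the 0-10 scale and the stage cutoffs below are invented purely for illustration:

```python
FACETS = ("Strategy", "Identification", "Control", "Execution")
STAGES = ("Firefighting", "Maintaining", "Promoting", "Innovating")

def facet_scores(responses):
    """responses: 40 scores (0-10 here, an assumed scale). Per the
    interview, questions 1-10 are Strategy, 11-20 Identification,
    21-30 Control and 31-40 Execution."""
    assert len(responses) == 40
    return {facet: sum(responses[i * 10:(i + 1) * 10]) / 10
            for i, facet in enumerate(FACETS)}

def maturity_stage(scores, cutoffs=(3.0, 5.5, 8.0)):
    """Map the overall average onto a maturity stage.
    The cutoff values are illustrative, not SKF's."""
    overall = sum(scores.values()) / len(scores)
    for cutoff, stage in zip(cutoffs, STAGES):
        if overall < cutoff:
            return stage
    return STAGES[-1]

# A site that is strong on Identification, weak on Execution:
site = facet_scores([6] * 10 + [9] * 10 + [5] * 10 + [3] * 10)
print(site, maturity_stage(site))
```

The per-facet averages play the role of the maturity matrix rows, and comparing one site’s dictionary against an industry-average dictionary gives the peer comparison Yolton mentions.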

MT: This seems to get into what you mentioned earlier about it being human nature to compare ourselves with others. Most consulting groups have problems here since the databases they keep are not comprehensive enough to give a true industry representation. How do you overcome this problem?

Yolton: I admit we had that problem at first, too. By now, though, SKF has conducted over 500 individual site analyses covering five broad industry categories:

  • Pulp & Paper & Forest Products
  • Industrial – Discrete
  • Industrial – Continuous
  • Hydrocarbon Processing
  • Electric Power

Each analysis remains confidential within the SKF database, which is accessible only by authorized SKF personnel. Moreover, for reporting purposes, only the analysis number is used for identification.

MT: Could you give us an example of how the data is used?

Yolton: Here is a typical scenario from the database of responses from pulp, paper and forest products (P&P&FP) surveys performed thus far (over 70 global responses). The question we asked was: “Considering all Preventive Maintenance (PM) tasks, how many are conducted by the operators?” (This is Question #16 of 40, and it is grouped in the Identification facet of the AEO concept.)

What we found was that the practice of using operators to perform PM tasks is not widespread within the paper industry. Only 10 percent of the responses indicate a world-class best practice of having more than 25 percent of their PM tasks performed by operators, while more than 40 percent indicate they have no operators performing PM tasks.

The CNA also provides other graphic depictions of the site’s current state. For example, among the helpful graphic comparisons the CNA produces is a spider chart that shows the composite average response for each question for the P&P&FP industry. This allows us to compare the client’s response values to the industry. Other industries can be similarly displayed for cross-industry comparisons.

MT: What analysis could you draw from this type of data and diagrams for this client or market segment?

Yolton: In very general terms, in the P&P&FP industry, there appears to be ample opportunity for improvement in the Execution phase of the asset efficiency optimization process. This involves the training and skill levels of a site’s technicians, as well as the level of testing and acceptance of the work performed. On average, responses to Questions 31-40 reveal that few of our global responders are actively engaged in upgrading the execution of reliability improvement tasks.

The CNA supplies a spider chart of each site’s responses as well, so that it becomes more obvious where the strengths and weaknesses lie in an improvement effort.

Using data from our P&P&FP example, we can see that the scores from one specific site compare quite favorably with world-class best practices on 21 of 40 questions. We also note that this site has particular strength in the Identification facet. Thus, we know that this site’s improvement action plan will focus on the obvious improvement areas, namely Strategy and Execution.

That, quite simply, is the value of the CNA program. It allows development of an action plan that focuses on the needs of a site. It also allows clients to determine their position relative to the average of the industry for each response. This leads to the refinement of the client’s improvement program based upon comparisons to the industry’s average.

Each regional SKF office (over 80 worldwide) has personnel trained to assist the client in performing this analysis. In many cases, industry specialists can be used to review the responses and suggest recommendations for improvement. Generally, a benefits value can be included.

To become better, each organization must know where it is starting. This Client Needs Analysis (CNA) process not only defines the starting point, it also helps guide the improvement plan.

MT: John, thanks for helping us understand the details of how one company is helping move its clients to maintenance and asset management best practice maturity.

(Editor’s Note: John Yolton is Maintenance Strategy Consultant for SKF’s Global Pulp & Paper Segment. He has over 23 years operating experience within pulp & paper and over 17 years of management and consulting experience with companies specializing in engineering, lubrication, sealing and CMMS/EAM solutions. He can be contacted directly at

6:00 am
October 1, 2005

Look to System Reliability When Selecting Bearing Protection

Fewer things to fail translate into fewer failures.

Over the last 30 years, bearing protection has emerged as prime territory for increasing overall rotating equipment reliability. With metallurgy, tribology and bearing design having progressed to the point where further enhancements to bearings and lubrication will be incremental at best, the deceptively simple task of retaining lubricant in, and excluding contamination from, the bearing housing remains the last zone for achieving significant gains in reliability.

Reliability defined
Though often used carelessly and inaccurately, the term “reliability” is really the mathematical probability that a device will “live” and perform for some time period. Quite simply, it’s the odds that a device will work for a given interval. The practice of reliability is all about identifying and implementing the products, practices and procedures that put those odds more in your favor.

To understand how reliability is calculated, we first must look at the product life cycle.

Product life is customarily described by the classic saddle or “bathtub” curve, which is broken into three distinct areas.

The first area describes the product’s infant mortality or “bad actor” phase. That is, whenever a population of devices is applied, there will initially be a greater rate of failure. Improper installation, defective products or other non-normal errors will manifest themselves as premature failures. These are the failures that manufacturers traditionally hope will be discovered during shakedowns, burn-ins and test runs.

The second area of the product life-cycle curve, after all the bad actors have been eliminated, is an area where the failure rate as a function of time will be more or less constant. This may be described as the useful product life phase.

The third and last area is the wear-out phase. Here again we will see an increase in the failure rate as devices reach their maximum life expectancy.

Reliability is calculated only on the middle or constant-failure-rate area of the product life cycle. To obtain an accurate and comparable measure of reliability, we need to study products or devices before they wear out naturally, and after the bad actors and damaged and defective devices have been shaken out of the population.

The formula for reliability as a function of time, Re(t), is:

Re(t) = e^(-ƒt)

The failure rate ƒ is the total number of device failures divided by the cumulative amount of run time for all devices. The value t is the time for which we wish to know the probability of device survival. The inverse of the failure rate, 1/ƒ, is the mean time between failures, or the more commonly used MTBF.

As an example, suppose a total of 670 failures were observed in a population of 3000 pumps over a period of 365 days. What is the probability of a pump lasting for 250 days?

The failure rate ƒ is:

ƒ = 670/((3000)(365)) = 0.0006119 failures per pump-day

(Note: MTBF = 1/ƒ = 1/0.0006119 = 1634 pump-days.)

Reliability then is:

Re(250) = e^(-0.0006119 x 250) = 0.858

This means there is an 86 percent chance that a pump in this population will survive 250 days. Keep in mind this also means there is a 14 percent chance that the pump will fail before that time. In other words, given a population of 3000 pumps, 420 pumps would be expected to fail prematurely.
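The arithmetic of the pump example is easy to reproduce. A minimal sketch in Python (the numbers are the article’s; the function names are mine):

```python
import math

def failure_rate(failures, population, days):
    """Total device failures divided by cumulative run time of all devices."""
    return failures / (population * days)

def reliability(f, t):
    """Re(t) = e^(-f*t): probability a device survives for time t."""
    return math.exp(-f * t)

f = failure_rate(670, 3000, 365)   # about 0.0006119 failures per pump-day
mtbf = 1 / f                       # about 1634 pump-days
re_250 = reliability(f, 250)       # about 0.858
print(f"f = {f:.7f}, MTBF = {mtbf:.0f} pump-days, Re(250) = {re_250:.3f}")
```

Note that this exponential model only applies to the constant-failure-rate middle of the bathtub curve, as the article states.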

System reliability
When the failure of any single device will result in failure of the total system (also called a series system), overall system reliability is calculated by multiplying the respective reliabilities of the individual components together. This is analogous to a chain being only as strong as its weakest link. It’s a simple and important concept, but one that all too often remains overlooked.

For example, in the pump bearing housing shown in Fig. 2, the failure of the radial or thrust bearing, or of the radial or thrust seal, will fail the system. (There are other components to consider as well, but to simplify this example we will use only four.)

The total reliability then is defined as:

Re(system) = Re(Radial Bearing) x Re(Thrust Bearing) x Re(Radial Seal) x Re(Thrust Seal)

If each individual component had a reliability of 0.95, the total system reliability is reduced to:

Re(system) = 0.95 x 0.95 x 0.95 x 0.95 = 0.815

No matter what is done to increase the reliability of individual components, in a system all the respective reliabilities are multiplied together. The key to system reliability then is not just to increase component reliability, but also to reduce the total number of multipliers in the reliability calculation.

The fewer reliability numbers we have to multiply together, the greater our overall system reliability. This is where selecting the right type of bearing protection will pay huge dividends. Non-contact, non-wearing bearing isolators have an infinite design life, or a reliability value Re = 1.0. If we eliminate the finite-life contact-type seals from the foregoing example, the system reliability becomes:

Re(system) = 0.95 x 0.95 x 1.0 x 1.0 = 0.90

Contact seals cannot have an Re value of 1.0, since they have a 100 percent failure rate over time.
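The series-system rule is just a running product, which makes the effect of removing multipliers easy to see. A short sketch (the component names follow the four-component example; the 0.95 figures are the article’s):

```python
from math import prod

def series_reliability(components):
    """Series system: overall reliability is the product of all
    component reliabilities (any one failure fails the system)."""
    return prod(components.values())

four_components = {
    "radial bearing": 0.95, "thrust bearing": 0.95,
    "radial seal": 0.95, "thrust seal": 0.95,
}
print(series_reliability(four_components))   # ~0.815

# Replace both contact seals with non-wearing isolators (Re = 1.0):
with_isolators = {**four_components, "radial seal": 1.0, "thrust seal": 1.0}
print(series_reliability(with_isolators))    # ~0.9025 (the text rounds to 0.90)
```

Raising each seal to Re = 1.0 is mathematically identical to deleting its multiplier, which is the article’s point: fewer finite-life terms in the product means higher system reliability.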

Increasing system reliability
Experience in a wide range of industrial settings over several decades has demonstrated that installing bearing isolators (Fig. 3) on a population of rotating equipment will greatly increase system reliability.

The ability of the bearing isolator to retain lubricant and expel contaminants is certainly important, but the mere fact that components with a life expectancy have been replaced by components with no life-expectancy limitation cannot be discounted.

There are fewer failures when bearing isolators are used, not only because they are doing a better job of protecting the bearings, but also because the probability of a seal failure has been eliminated. (Since a seal failure necessarily causes bearing failure, many system failures are misdiagnosed as bearing failures when a seal failure is causal.) A bearing isolator’s value becomes more obvious when you recall from the life-cycle curve that we are only considering the useful life phase when calculating reliability. Non-contact, non-wearing bearing isolators also eliminate the wear-out and infant mortality phases of all finite-life products.

Finite-life lip or face contact seals easily can be damaged upon installation and, consequently, be dealt a shortened life expectancy. Unfortunately, damage or manufacturing defects also may not be readily apparent from visual observation during system assembly. A finite-life contact seal may have little life left after installation, which will place that device in the precarious infant mortality phase of the life-cycle curve.

Cold hard facts
Anything with a life expectancy can easily have that life shortened. There is much you can do to shorten the life of any device or component; conversely, there is little you can do to make it last beyond its life expectancy.

The best you can hope for is to try and keep the product out of the infant mortality life-cycle phase. All contact seals will fail. When is simply a matter of time and probability.

Failure analysis seminars are usually quite popular. (Interestingly, failure analysis manuals are often larger than application guides.) Yet, while failure analysis is important, the lessons learned will only help increase the reliability multipliers, not eliminate them, and perhaps reduce the number of devices falling into the infant mortality phase. To eliminate the probability of a seal failure, and thus a system failure, you would need to eliminate finite-life contact seals from the equation.

Granted, given a system’s design, non-contact, non-wearing bearing isolators may not be a viable option. There are instances where finite-life contact seals are the only option. In those cases, living with an increased number of reliability multipliers, and hence a lowered system reliability, becomes a necessity. In most cases, however, contact seals and their associated reliability multipliers should be eliminated wherever possible.

The bottom line is really quite simple: Want fewer failures? Install fewer things that fail.

Neil Hoehle is Chief Engineer for Inpro/Seal Company in Rock Island, IL. A graduate of Western Illinois University, he has spent the last 24 years working in the design and development of bearings, housings and seals. Telephone: (309) 787-4971; e-mail:

6:00 am
October 1, 2005

Selecting Modern Reverse-Flow Filter-Separator Technology

Is this type of cost-effective technology right for your operations?

Each year, thousands of positive displacement compressors suffer serious damage because upstream filters or separators are not really doing their jobs as anticipated by the owner-purchaser. The reputations of machinery engineers are also at risk because they often fail to understand the full impact of liquid and particulate entrainment in the gas. That said, engineers would do well to study the merits of reverse-flow filter-separator technology.

Reverse-flow filter-separator technology is a profit generator for best-of-class refineries and petrochemical plants. First applied in the mid 1970s, these flow-optimized, self-cleaning coalescers (SCCs) represent mature, low life-cycle-cost, best-technology solutions for reliability-focused users. A reliability-focused user is far more interested in low life-cycle costs than lowest possible purchase price.

However, since aggressive marketers are known to have clouded the issue with advertising claims, a thorough examination and explanation of facts and underlying principles is in order.

Conventional filter-separators vs. SCCs
To understand how SCCs work, we first must recall how most conventional filter-separators (CFSs) function. In the CFS shown in Fig. 1, the gas enters the first-stage filter elements, where its velocity is reduced as it passes through a large filter element area. Initially, the various contaminants (iron sulfides, etc.) are caught by the filter, but gas forces gradually slough them off at a particle size small enough to pass through the filter elements.

The gas and solid particles, as well as the liquids coalesced on the inside of the filter elements, are then re-accelerated and re-entrained in the collector tube before being led to the next separator section. With the wire mesh or vanes in this section typically allowing passage of fine mist droplets and particles (call them “globules” of liquid) in the below-3-to-8-micron size range, a good percentage of liquid and small solids (particulates) remains entrained in the gas stream leaving the CFS.

In contrast, self-cleaning coalescers or SCCs (Fig. 2) vastly reduce this entrainment and send much cleaner gas to the downstream equipment.

However, SCCs do not accomplish this task by merely making the inlet into an outlet, changing the outlet to the inlet, and calling the “new” device a reverse flow unit. Instead, consideration had to be given to internal configuration, flow pattern and—most importantly—the characteristics of both the liquids and solids to be removed. The designers of this equipment had to adjust their thinking from only pressure-drop concerns to considerations dealing with liquid specific gravities, liquid surface tensions, viscosities and re-entrainment velocities.

In properly designed SCCs, gas first passes through the plenum, then through collection tubes and to the filter elements. The front-end of an SCC represents a slug-free liquid knockout. The de-entrainment section is sized to reduce the gas velocity so as to allow any particulates that might have made it through the filter to either drop out or attach themselves to the coalesced liquid droplets that fall out at this stage. Over three decades of solid experience have proven the effectiveness of this design. Essentially all entrained particulates and mist globules are removed, as are free liquids and large agglomerated materials.

Removal efficiencies examined
Some CFS configurations and models are claiming removal efficiencies with their so-called coalescers that are much better than those actually achieved. These claims are often made for vessels that are much smaller than the well-proven SCCs, and they are virtually impossible to achieve by single-stage CFS models. In addition, these CFS designs are vertically-oriented and their manufacturers or vendors sometimes state—incorrectly—that effective coalescing cannot be achieved in a horizontal vessel.

Upon closer examination, one may find certain CFS configurations to have high pressure drops with “moist” gases, high velocities, shorter filter elements and virtually no slug-handling capacity. Moreover, unless a vendor or manufacturer uses the High Efficiency Particulate Air (HEPA) filters mandated for use in nuclear facilities and required in hospital operating rooms, filtration effectiveness down to 0.3 micron (considerably less than one-hundredth of the width of a human hair) is simply not achievable.

Filter quality examined
Keep in mind that a conventional forward-flow filter-separator is considered to be a “coalescer.” It incorporates filter elements that operate on the coalescing principle. The filter elements coalesce liquid droplets into globules 10 microns and larger, to be removed by the downstream impingement vane mist extractor (vanes are guaranteed to remove 8 to 10 micron particles). It is not reasonable to use simple piping insulation as a filter medium and guarantee the removal of droplets in the 0.3 micron size range. Multi-stage configurations are needed and the ultimate filter has to be “HEPA-like,” i.e., it has to far exceed the quality of piping insulation.

A good design typically embodies long fiberglass filter elements using certain micro-fiber enhancements that are known to modern textile manufacturers. Low-velocity technology is very helpful and surface area is not as important as the depth of the media through which the gas has to pass.

The thicker the filter element, the longer the gas takes to pass through it, resulting in more and better coalescing of the liquids.

Some SCCs are offered with thin, high-pressure-drop, pleated-paper elements, representing very low contact times and high-exit (re-entrainment) velocities. As dirt builds up, exit velocities rise even higher, resulting in more and more re-entrainment of liquid mists and any associated, shearable solids exiting the cartridges. And the game goes on, as the re-entrained particles get smaller and smaller, thus meeting an artificial guarantee as velocities become higher and higher.

Others offer high-density, high-depth media fibers that result in high pressure drop and high exit velocity, and which also re-entrain immediately after passing through the cartridges. Both of these approaches, as well as the downsizing of vessels and internals, contribute to marketing strategies geared to high consumption of elements and, thus, high sales volume and profitability for the vendor.

A competent SCC manufacturer’s approach should be just the opposite—to give the user/purchaser maximized reliability, maximized cartridge life and lowest possible maintenance expenses. Years ago, the concept of “self-cleaning” vessels was successfully transferred from oil-bath separator scrubbers. They are still offered for specific applications and incorporate rotating cleanable bundles. This technology evolved to filter vessels with a rotating cleaning mechanism and to the present state-of-art, i.e. the back-flushing of individual elements while remaining on-stream.

Further, competent manufacturers still offer maximized performance from even conventional vessels by utilizing tried and true designs with maximized internals. They will not advocate the use of downsized versions that violate certain velocity and pressure-drop criteria, thereby incurring high maintenance and non-sustainable, or non-optimized performance.

This takes us back to HEPA filters. Designed and developed for air filtration, HEPA filters recycle the air many times within a closed system and periodically add fresh makeup air to achieve the desired air quality. In the hydrocarbon processing industry, there is usually only a single-pass opportunity to achieve clean gas. It is rarely feasible to recycle process gases several times to obtain the desired gas purity. Since absolute, beta-rated filter elements are simply not able to achieve these results, many inferior designs call for one or more “conditioning” filters, or vessels, to be placed upstream of their “coalescer.”

Also, be on the lookout for offers that allude to the advisability (or just the merits) of installing downstream vessels to clean up certain liquid streams to which the gas has been exposed. A relevant question to ask is “Why does the liquid have to be cleaned up if the upstream vessel(s) has done its job of, say, protecting the treating tower?” Without fail, the answer will point to liquids, or mists or corrosion products in the form of small solids particles that were not adequately removed upstream of the tower. Hence, foaming and treating agent contamination were not eliminated. This means tower upsets, additional filtration for liquids and even the possible need for carbon beds or filters to remove trace liquid aerosol contaminants.

SCCs have been successfully implemented to protect such process streams and to eliminate or prevent contamination-related upsets. Time and again, bottom-line results show that self-cleaning coalescers protect equipment and safeguard reliability.

How to specify and select the best equipment
Superior self-cleaning coalescers can remove iron sulfides, viscous fluids and slugs because of their inherent low pressure drops (4″ to 6″, or 100 to 150 mm H2O). Moreover, low velocities and other important considerations conducive to good separation and low life cycle costs must be taken into account here.

With input from the user or destination plant, a competent vendor can assist in drawing up a good inquiry specification. Within the specification there are many options to consider. The choice, quite clearly, depends on process conditions and related parameters, some of which are as follows:

  • Dry filter: for gas with associated solids
  • Dry filter, self-cleaning: for gas with associated solids
  • Line separator: for gas containing entrained liquid mist
  • Vertical or horizontal separator: for gas with entrained liquid globules (mist, aerosol)
  • Vertical or horizontal separator: gas with entrained liquid particles (mist) and free liquid (slug) removal
  • Vertical or horizontal filter-separator: gas with entrained liquid globules (mist, aerosol) and stable solids
  • Reverse-flow mist coalescer: gas with entrained liquid globules (mist, aerosol). Removal to sub-micron particle size and extremely high efficiency
  • Reverse-flow mist coalescer with slug chamber: gas with entrained liquid globules (mist, aerosol), slugs and (stable or unstable) solids. Removal to sub-micron or better, at high efficiency (can be furnished in self-cleaning configuration while in full service)
  • Oil-bath separator-scrubber: gas with liquid globules (mist) and solids (stable or unstable). Removal to 3 microns at 97% efficiency by weight
  • Tricon 3-stage separator: gas with entrained liquid globules (mist), slugs and solids (stable or unstable). Removal to 3 microns at 97% efficiency

Evaluating the proposed configurations

Once the various bidders submit their offers, they must be evaluated using life-cycle costing and suitability criteria. An objective evaluation must keep in mind the following:

1. Velocity: Once the gas stream enters the vessel, there should be no internal configuration that accelerates the gas back to pipeline velocity. Increasing the gas velocity will only shear the liquid into smaller and smaller globules.

2. Pressure Drop: In no instance should a piece of separation equipment be designed with more than a 2-psi pressure drop from flange to flange when the vessel operating pressure exceeds 500 psig. At less than 500 psig, the flange-to-flange pressure drop should be limited to 1 psi or lower. Pressure drop consumes energy, and energy costs money.

In no design of separation equipment should the pressure drop across an element arrangement be allowed to exceed 0.5 psi. As filter elements become wetted and 50 percent plugged, the pressure drop increases four-fold.

If, for example, the initial pressure drop is 0.5 psi and the elements become half-plugged, the pressure drop will increase to 2 psi. Once the elements become three-quarters plugged, the pressure drop will increase to 8 psi. This is 16 times the initial pressure drop, and a change of elements is now unavoidable. Keep the initial filter element pressure drop as far below 0.5 psi as possible to avoid frequent element change-outs. Remember, spent filter elements have to be disposed of, and this disposal can become expensive.
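The four-fold jump at 50-percent plugging and the 16-fold jump at 75-percent plugging are consistent with pressure drop scaling as the inverse square of the remaining open-area fraction. A minimal sketch of that rule (an illustrative model inferred from the figures above, not a vendor correlation):

```python
def element_pressure_drop(dp_initial_psi, plugged_fraction):
    """Estimate wetted-element pressure drop, assuming drop scales with
    the inverse square of the remaining open-area fraction (illustrative
    model consistent with the article's figures, not a vendor correlation)."""
    open_fraction = 1.0 - plugged_fraction
    if open_fraction <= 0:
        raise ValueError("elements fully plugged")
    return dp_initial_psi / open_fraction ** 2

# Starting from a 0.5-psi clean pressure drop:
print(element_pressure_drop(0.5, 0.50))  # half plugged        -> 2.0 psi
print(element_pressure_drop(0.5, 0.75))  # three-quarters plugged -> 8.0 psi
```

The same model shows why a low clean pressure drop matters: starting at 0.25 psi instead of 0.5 psi halves the drop at every plugging level and stretches the time between change-outs.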

3. Filter Element Cost: Always ascertain the cost of replacement elements. Some vendors will practically give away vessels in order to generate spare parts sales. Find out the inside diameter, the outside diameter and the length of the proposed elements and how many of these make up the vessel internals. Using this information, calculate the surface area on the inside of the elements and the velocity of the gas entering the elements.

Additionally, from this information, determine the exit velocity of the gas leaving the elements. Note that this velocity should not exceed the re-entrainment velocity of the liquid. Some of the reverse-flow coalescer offers you receive will turn out to be “egg beaters” that take whatever liquid enters the vessel and shear it into vastly greater numbers of much smaller globules, which are then re-entrained in the gas stream. Liquid globules can be sheared so small that they cannot fall out again until they re-coalesce downstream. All the same, the liquid is there to do its damage to downstream equipment.
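The area-and-velocity check described above is simple geometry. A minimal sketch, with the function name, element dimensions and flow figures all invented for illustration:

```python
import math

def element_check(id_in, length_in, n_elements, gas_flow_acfm):
    """Compute the total inside surface area of cylindrical filter
    elements and the superficial gas velocity through that area.
    id_in, length_in: element inside diameter and length, inches.
    gas_flow_acfm: actual gas flow at conditions, cubic feet per minute.
    (Hypothetical helper for illustration only.)"""
    # Inside cylindrical surface area of all elements, in square feet
    area_ft2 = n_elements * math.pi * (id_in / 12.0) * (length_in / 12.0)
    # Superficial velocity through the element wall, feet per minute
    velocity_fpm = gas_flow_acfm / area_ft2
    return area_ft2, velocity_fpm

# Hypothetical vessel: 24 elements, 3.5-in. ID x 36-in. long, 10,000 ACFM
area, vel = element_check(3.5, 36.0, 24, 10_000)
print(f"{area:.1f} sq ft, {vel:.0f} ft/min")
```

Comparing the computed exit velocity against the liquid's re-entrainment velocity is the pass/fail test; a bid whose elements force a higher velocity is the "egg beater" described above.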

4. Vessel Life: Under ordinary circumstances, separation equipment should have a useful life of 20-25 years. Needless to say, corrosion problems, internal explosions, vibration or pulsation, overloads, hydrate formation, lack of routine maintenance, incorrect or faulty maintenance practices, misapplication or use of equipment under unsuitable operating conditions, replacing elements with unsuitable or poor-quality substitutes and various other forms of mistreatment can adversely affect vessel life.

5. Reliability of Vendor: If a piece of separation equipment is bought and put into service under conditions that deviate from the design intent, it may not live up to expectations. Such underperformance will usually manifest itself rather quickly. These unpleasant surprises can be avoided by selecting a reliable vendor as the source of supply. The individual or team engaged in the selection and evaluation task should ask:

  • if the vendor has the facilities to manufacture the equipment, or “farms it out” to sub-vendors
  • who builds the essential parts such as the filter elements, the mist extractors, other internals, and the vessel itself
  • who does the x-raying, hardness testing, ultrasonic examination, magnaflux examination (both wet and dry, if required), stress relieving, hydrostatic testing, grit-blasting and painting, and final preparation for shipping

6. Value: How important is proper performance of the separation equipment to the protection of downstream equipment? Certainly, monetary value has to be placed on repair and maintenance of the downstream installation.

To what extent would rotating equipment such as turbines, turbo-expanders, centrifugal or reciprocating compressors, internal combustion engines, dehydration, amine or molecular sieve units, refinery or petrochemical processes, meter runs, power plants, fired heaters, plant fuel, municipal fuel and/or, perhaps, gas coming in from producing wells be affected by potential performance deficiencies of the separation equipment?

What are prudent downtime risks and what would be the cost of rectifying problems with downstream equipment caused by defective filtration equipment?

A reliability-focused organization demands answers to these questions!

7. Follow-up: Who will ultimately determine whether the goods specified and purchased are, in fact, the goods received? Will responsibility change hands from selection to purchasing to operation, with a relaxed regard for what was intended to happen versus what is actually happening? If so, only the very best and most conservatively-designed piece of separation equipment should be purchased.

Contrary to “conventional wisdom,” there have been no “super breakthroughs” in the design of separation equipment in the past 30 years. On the other hand, considerable changes have been made in presentation and marketing methods over the past two or three decades. Some marketing claims as to how far the state-of-the-art has advanced during the past several years (or even in recent months) are truly stretching the imagination. Beware, since they may simply be designed to sell spare parts and/or just stay alive in a highly competitive environment.

Life cycle cost calculations
Life cycle cost (LCC) calculations also must be used to determine the wisest equipment choice. Life-cycle-based filter equipment cost is the total lifetime cost to purchase, install, operate and maintain the equipment (including associated downtime), plus the downstream cost due to contamination from inadequately-processed fluids or even the risk of damaging downstream equipment, and (finally) the cost of ultimately disposing of the equipment.

A simplified mathematical expression could be:

LCC = Cic + Cin + Ce + Co + Cm + Cdt + Cde + Cenv + Cd

LCC = Life Cycle Cost
Cic = Initial cost, purchase price (system, pipe, auxiliary services)
Cin = Installation and commissioning cost
Ce = Energy consumed by incremental (i.e., higher) pressure drop across the equipment offered
Co = Operation costs, if applicable
Cm = Maintenance and repair costs
Cdt = Downtime costs
Cde = Incremental repair cost, downstream equipment
Cenv = Environmental costs
Cd = Decommissioning and/or disposal costs


Energy, maintenance and downtime costs depend on the selection and design of the filtration equipment, the system design and integration with the downstream equipment, the design of the installation and the way the system is operated. Carefully matching the equipment with the process unit’s or production facility’s requirements can ensure the lowest energy and maintenance costs and yield maximum equipment life.

When used as a comparison tool between possible design or overhaul alternatives, the life-cycle-cost process will show the most cost-effective solution, within limits of the available data.
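Since the LCC expression above is a straight sum of cost components, comparing alternatives reduces to totaling each vendor's components over the equipment's life. A minimal sketch (all dollar figures are hypothetical):

```python
def life_cycle_cost(**costs):
    """Sum the LCC components from the article's expression:
    LCC = Cic + Cin + Ce + Co + Cm + Cdt + Cde + Cenv + Cd.
    Any component not supplied defaults to zero."""
    components = ("Cic", "Cin", "Ce", "Co", "Cm", "Cdt", "Cde", "Cenv", "Cd")
    unknown = set(costs) - set(components)
    if unknown:
        raise ValueError(f"unknown components: {unknown}")
    return sum(costs.get(c, 0.0) for c in components)

# Hypothetical comparison: a cheap vessel with high pressure drop and
# frequent element change-outs vs. a conservatively-designed one
# (all figures invented for illustration).
bid_a = life_cycle_cost(Cic=40_000, Cin=8_000, Ce=60_000, Cm=35_000, Cdt=25_000)
bid_b = life_cycle_cost(Cic=65_000, Cin=9_000, Ce=15_000, Cm=12_000, Cdt=5_000)
print(bid_a, bid_b)  # the lower purchase price is not the lower LCC
```

In this invented comparison the vessel with the higher purchase price wins on lifetime cost, which is precisely the point of evaluating bids on LCC rather than initial price alone.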

Concluding thoughts
Initial investment costs go well beyond the initial purchase price for your equipment. Investment costs also include engineering, bid process (“bid conditioning”), purchase order administration, testing, inspection, spare parts inventory, training and auxiliary equipment. The purchase price of filtration equipment is typically less than 15 percent of the total ownership cost. Installation and commissioning costs include the foundations, grouting, connecting of process piping, connecting of electrical or instrument wiring and (if provided) connecting of auxiliary systems.

But, suppose now that a team of engineers goes through the planning, bidding, procurement, installation and evaluation stages of the separation equipment and finds that it matches the requirements exactly. Then comes the spare-parts purchasing stage and, at that point, cheap, incompatible sets of fiberglass pipe-insulation elements are bought. Suppose further that these are installed when dictated by the best operating practice assigned to the installation.

Chances are the element manufacturer will have made all kinds of promises and a few dollars will have been saved. However, what happens when these substitutes are installed? There is no question about it—the separation equipment can no longer live up to the job specifications, and bad things start to happen at that point.

So then, to the reliability-focused and risk-averse user, life cycle costs are of immense importance. In contrast, repair-focused users are primarily interested in the initial purchase price. There is consensus among best-in-class industrial and process plants that only the truly reliability-focused facilities will be profitable a few years from now, and only they will survive.

A frequent contributor to Maintenance Technology, Heinz Bloch is the author of 14 comprehensive textbooks and more than 300 other publications on machinery reliability and lubrication.




Leading The Safety Process

What corporations can do to increase safe work practices.

For the last seven years, I have been working in the electric utility construction industry as a Regional Safety Manager. During this period, I have had the misfortune of investigating many serious accidents, ranging from amputations to fatalities. A common thread running through all of these cases has been the fact that a shortcut was taken by one or more employees and a critical procedure was not followed.

Workplace culture as a driver
A line worker was in an aerial lift, working on a new overhead power line that was being installed along a rural gravel road. He was approaching an existing single-phase, 7200-volt overhead power line with a grounded AWG #2 triplex service drop cable in the boom’s jib. He was going to connect a service drop to a transformer.

This young employee had moved the transformer earlier from an old 12.2-meter-tall pole to a new pole that was 13.7 meters tall. He was not wearing his rubber insulating gloves (which were still in the glove bag, hanging from the tool board in the bucket). Furthermore, he also had not placed the rubber insulating line hose (which was also in the bucket with him) on the energized phase conductor. No one was observing him while he was working.

He maneuvered the aerial lift bucket between the phase and neutral conductors on the existing power line, with the bucket at a 45-degree angle to the boom. The end of the triplex cable was inside the bucket with him. The existing line was located parallel to and closer to the gravel road than the new line being installed. The employee apparently contacted the phase conductor and was electrocuted. His supervisor found him slumped down in his bucket.

The OSHA data base is full of accident descriptions very similar to this one. They all have a couple of things in common:

  • failure to follow proper procedure is a part of every accident listed; and
  • experience and/or training, in most cases, do not appear to be an issue.

There is a widespread misconception that many accidents occur simply because an employee is not following the rules, and that most injuries are the fault of the individual. That is not the case, however, as it’s the cultures of our workplaces that drive everything we do.

Just recently, a senior lineman was killed when he was flung from an aerial lift after the bucket from which he was working had been caught under a tree branch. Sadly, he had not been wearing his fall-protection harness, which was required by company policy. In fact, he had been warned on several occasions about violating that policy.

While it’s true that this tragedy resulted because a worker chose to violate a company policy, it was the company’s safety culture that created the environment for him to make that decision. Had the proper safety culture existed, and the correct disciplinary action been taken when the initial violations were observed, this accident most likely would not have occurred.

Creating the proper culture
I believe that any company can achieve the goal of zero accidents. One of the first steps in the process is that you must treat safety as a core business value. If you approach safety as a process, or just another program, you will fail to motivate employees to incorporate it into their daily activities. If you make safety a core business value, it will become woven into everything you do, and every decision you make.

Sometimes, companies are lured into a false sense of security because they haven’t had an injury in a year. They may think that they are doing everything right. The reality, though, is that they have just been lucky. Only when an employee’s behaviors are constantly safe can you consider that you have successfully integrated safe work practices into your corporate culture.

Executive decisions
Many company presidents and CEOs across the country think that they are taking the correct steps toward improving their safety integration process by hiring a qualified safety professional, providing that person with adequate financial resources and then telling everyone in a company memo that “Safety is Number One.” Yet, these same executives find themselves frustrated year after year when the company continues to experience accidents and they are unable to reduce their injury rates. And why not? As popular author Stephen Covey tells us: “If we always do what we’ve always done, we’ll always get what we’ve always got.”

Company executives who are frustrated over the inability to reduce injury rates within their organizations must someday come to the realization that they bear the ultimate responsibility for promoting a safety culture. They and their line management team must take 100-percent responsibility for integrating the safety process into their workforce. To accomplish this they must:

  • Make it very clear that safety truly is the company’s number one core value.
  • Believe that a zero accident/injury workplace is possible.
  • Set expectations for those who report to them and hold them accountable to those expectations with consequences for non-compliance.
  • Accept no excuses if things go wrong and non-compliance is a factor.
  • Address the issue immediately.
    And, most importantly:
  • Model the safe behavior they expect of their employees.

There is no question that organizations with the greatest success rates at preventing accidents depend on line organization involvement in the safety process. But those in the line organization need support from the corporate leadership, as well as access to resources with the technical expertise to advise them and provide informed guidance for the overall safety program.

Companies that have achieved the greatest success at maintaining safety in the workplace do so by reviewing all of the elements of the safety process. You cannot just focus your efforts in one area, such as tightening discipline in a system that is out of control. It is only when all parts of the safety process are recognized and worked on that a successfully functioning safety culture can be realized.

Proper training
The importance of health and safety training in the workplace should never be underestimated. It is the key to success in managing safety in the work environment.

Proper safety performance in the workplace rests on the education and training of a company’s greatest resource: its employees. Employees’ acceptance of and participation in a safety culture require sufficient knowledge and understanding of the hazards they may encounter in the performance of their duties.

Companies that excel at promoting a safety culture have developed a comprehensive safety education system that budgets for regular, ongoing employee, supervisor and project-manager education, as well as toolbox or task training. The positive returns on the training investment come in the form of improved safety performance, with the added benefit of a greater degree of competency and efficiency in task performance.

For a health and safety education program to succeed, it must be a regular part of the budget. Including safety training as a line item within the budget clearly demonstrates management commitment and promotes employee involvement.

Mike Bahr is Electrical Safety Program Manager with National Technology Transfer, Inc. (NTT). A 20-year veteran of the industry, he is a certified and published subject matter expert in the fields of electrical safety and regulatory compliance. Telephone: (800) 363-7758 x 348.
