Archive | July, 2005


1:31 am
July 2, 2005

So you think you have the right Maintenance Program?

Many organizations feel that if they can move from reactive to preventive maintenance they are headed down the right path, and that this in turn puts them on the road to proactive maintenance. However, as many studies have demonstrated, in excess of 80 percent of all failures are random. Time-based preventive maintenance (PM) programs therefore will not be effective: in most cases, you will be doing too much work too soon or too little work too late.

Over the past few decades, the demands placed on maintenance have been changing. In leading companies today, maintenance is no longer viewed as a cost center; rather, it is expected to contribute to the strategic goals of the company. In today's environment of asking maintenance to do more with less, it becomes ever more critical for the maintenance organization to define, as accurately as possible, the right work to be doing at the right time.

In world class organizations, the maintenance program breakdown is benchmarked as follows: Too little too late—15 percent; too much too soon—5 percent; the right work at the right time—80 percent.

However, the typical industry averages today are: Too little too late—60 percent; too much too soon—20 percent; the right work at the right time—20 percent.

Too little too late: Deviation Work. This type of environment is largely reactive as a result of financial and personnel cutbacks. Consequently, work is not completed as planned. Emergencies occur regularly, and much of maintainers' and operators' time is spent in a fire-fighting mode repairing failed equipment.

Too much too soon: Nonvalue-Added. This type of situation results when too many nonvalue-added tasks are performed and a majority of workers' time is spent on jobs that are not well timed or essential to the operation of the plant. Every structured maintenance program includes some degree of PM; however, most failures are random.

When equipment is in a stable state, performing time-based maintenance tasks risks disrupting a well-running, stable system and can actually create unanticipated problems. Moreover, PM is costly and does not ensure you will detect the cause of a problem before it occurs, and thus cannot prevent failure from taking place.

The right work at the right time: Base Work. The proper process is in place to identify the appropriate maintenance program using a combination of work identification methodologies. The goal is to identify and mitigate problems before they arise.

So where are you in the overall scheme of things?

To help you assess where you want to be, it is important to understand that a world class program will include some preventive, predictive, and run-to-failure (reactive) maintenance tasks.

The path to the optimal maintenance program for your assets begins with work identification methodologies such as maintenance task analysis (MTA) and reliability centered maintenance (RCM), which force the participants to understand the operating context of the equipment and then to define the most appropriate maintenance program. Before starting work identification, it is very important to go through a detailed criticality and risk assessment to determine the 15-20 percent of assets that require the greatest attention in the early stages of the program.

The balance of the assets—those that do not require RCM—should eventually, at a minimum, have MTA applied. The rationale for using two complementary work identification methodologies is practical: RCM is typically more resource intensive, hence lengthier and more costly, and is therefore applied only to the assets/equipment of highest consequence, criticality, and risk.
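The screening logic described above—rank assets by risk, send the top slice through RCM, apply MTA to the balance—can be sketched in a few lines. Everything here (asset names, the 1-5 likelihood/consequence scales, the 20 percent cutoff) is an illustrative assumption, not a prescribed method:

```python
# Hypothetical criticality screen: score each asset as
# likelihood x consequence, rank, and route the top slice to
# RCM and the remainder to MTA.

def plan_work_identification(assets, rcm_fraction=0.20):
    """assets: list of (name, likelihood 1-5, consequence 1-5)."""
    ranked = sorted(assets, key=lambda a: a[1] * a[2], reverse=True)
    cutoff = max(1, round(len(ranked) * rcm_fraction))
    return {
        "rcm": [name for name, *_ in ranked[:cutoff]],   # highest risk
        "mta": [name for name, *_ in ranked[cutoff:]],   # the balance
    }

assets = [
    ("boiler feed pump", 4, 5),
    ("cooling tower fan", 3, 3),
    ("instrument air compressor", 2, 5),
    ("sump pump", 2, 1),
    ("conveyor drive", 3, 2),
]
plan = plan_work_identification(assets)
```

In this sketch only the highest-risk asset gets the resource-intensive RCM treatment; in practice the cutoff would come from the detailed criticality and risk assessment, not a fixed fraction.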

MTA is an extremely cost effective and practical way to develop reliability programs for equipment. With MTA, you can build and implement basic, technically-sound reliability programs for all plant assets in the shortest timeframe possible.

In today’s world it is essential that we strive to optimize every maintenance dollar our organizations spend. Thus we all have a significant responsibility to ensure that the best maintenance programs possible are put in place. It should be recognized also that this is an ongoing activity, not one that will be completed a week from Monday. With this in mind, we all need to appreciate that the results will be realized incrementally over time. Enjoy the journey! MT


1:28 am
July 2, 2005

Who’s driving?


Bob Baldwin, Editor

During a recent visit by an ERP/EAM supplier to our office, the discussion turned to the information-rich reports that can be served up and the feature-rich screen interfaces for querying the databases of modern integrated systems. It was noted that company information networks can now link the plant floor to the executive suite. Of course, the dashboard metaphor was used when referring to the screen display.

The mention of the term dashboard serves up from my memory banks a number of images, primarily the dashboard of an automobile, often the one on the TR3 Roadster I owned 50 years ago.

What does the super rich guy look for on the dashboard of his car? I can’t say, but I tend to picture him looking more at the hand-finished walnut paneling than the instruments. Further, I envision him never sitting behind the wheel, but in the back seat reading the newspaper.

Although my TR3 didn’t go all that fast, it was made to be driven. The dashboard layout was designed to serve up the most important information for the driver. The speedometer and tachometer were big and placed in front of the steering wheel for easy viewing. Gauges for fuel level, water temperature, oil pressure, and electrical charge were smaller and placed in the center console.

What about a dashboard for the auto mechanic? Today, it is a part of the diagnostic instrumentation that hooks up to the on-board computer. Drivers never see it because it only works when the car is in the shop. It’s not all that different from some of the predictive maintenance software used in an industrial plant.

When you buy a car, you get the dashboard the design team decided was best for helping to sell the car. The same goes for the dashboard you see during the software demo. Fortunately, the dashboards on software packages are configurable.

The real trick is proper configuration of the dashboard. What performance indicators do you need? Can they be served up from the data you are able to collect? To continue with the automobile metaphor, you must decide where you are going and how fast you want to get there. What are you trying to do, win an off-road rally, qualify for the pole position at Indianapolis, or survive a demolition derby? How serious are you about winning? What kind of backing do you have? MT




6:00 am
July 1, 2005

A Personal History of the Art and Science of Reliability


Charles J. Latino, Founder, Reliability Center. This article is based on remarks delivered by the author on May 24, 2005, at the Maintenance & Reliability Technology Summit (MARTS) conference in Rosemont, IL. At the concluding Summit Session, Latino received the Summit Award in recognition of his lifetime of outstanding contributions to the maintenance and reliability professions.

Lessons learned during a five-decade journey toward excellence in reliability and maintenance.

One of the advantages of having someone as old as me (75) on the podium is that I can give you a living history lesson about the art and perhaps the science of reliability and its relationship to maintenance. Because I have seen events unfold over the years, I can step back and see trends that others may not have had an opportunity to see. Perhaps my overview can be projected into the future, a future that you may shape and will certainly encounter.

My first job out of college was in a relatively small chemical plant. I was trained as a chemical engineer but very early I was made a maintenance shop supervisor. What an experience that was. Maintenance doesn’t really describe what we did; it needed adjectives. The entire job of my group was to fix machines, pipes, and anything else that broke. So, expensive breakdown maintenance is a more appropriate term for what we were instructed to do.

Since plant management was clearly focused on producing product, there was no tolerance for taking time to find out why anything broke. So after a couple of years of this silliness, I joined the U.S. Army as a commissioned officer in the health area. The contrast was like a breath of fresh air because the military provides wonderful training and gives young people a lot of responsibility. I became the sanitary engineer on a military post in Virginia. I worked out of a hospital and was also in charge of preventive medicine.

The philosophy of prevention
Well, before you get too excited about the lessons I probably learned, let me tell you what we did. We immunized babies and adults going overseas. The idea was to prevent illnesses. One of the things I did was raise flies. That's right, I was raising flies and sending them, in the hard-shell pupal stage, to 2nd Army Headquarters, where they were allowed to develop into adults and were exposed to different insecticides so that the Army could tell me which insecticide would be most effective to use during the summer months. I did a number of things like this that were truly preventive.

In the Army, I learned that training could be very effective when new things were learned and then applied in realistic circumstances. I also learned that young professionals should be assigned to areas where their education and natural inclinations lean. Finally, I learned that if we are creative enough, we could prevent bad things from happening.

After I was discharged into the reserves, I returned to my previous company, but to a recently built facility in Virginia. Again, despite my training as a chemical engineer, I was placed in maintenance and I soon graduated to an authority position. To me that meant I had some space to try my own ideas.

Trying new ideas
In 1957, I purchased a rather heavy electronic box used to analyze equipment vibrations. Using this box, I was able to identify equipment problems before they caused us downtime. This is certainly old hat to you, but in the 1950s it was a revelation. The early warning systems allowed us to prepare for the fix in a way that considerably reduced downtime and sometimes eliminated the need for it.

During this time, I researched other nondestructive tools as they became available. With sonic equipment, we were gauging the thickness of pipes and tubes and with infrared thermography we were identifying furnace and heater problems as well as the condition of our roofing systems. This was great fun. And since process uptime, and consequently revenues, were rising, management was supporting my efforts—although I had to constantly remind them what our contribution was. As you know, when something good is happening there are always many people standing in line to take credit for it.

Using assets wisely
In early 1960 I was sweet-talked into a temporary move to another plant where I entered a new world. This was my employer’s largest facility. It had one batch and three continuous polymer facilities and a very large operation to produce various synthetic fibers used in products such as tires, carpets, and drive and conveyor belts.

My first assignment was to build a facility to make dies to extrude the polymers. It was my job to see to the building of the facility, the technology transfer from Italy for the manufacturing facility, and the hiring and training of people to do the work. Remember, I was a trained chemical engineer and although the assignment was interesting it was strictly mechanical.

The holes in these dies were very small, some so small they could not be seen with the naked eye. And to further complicate the project, most of the holes were not round, but Y-shaped and dumbbell-shaped. And each of the holes in a die had to be exactly like the 50 to 100 other holes in that die and the holes in other similar dies.

We needed to acquire tools to make these holes but nobody sold tools that small at that time so we manufactured them on jeweler’s lathes under microscopes. Each tool took an hour to make and held up for less than 10 holes, yet we had orders to produce thousands of holes.

I used two of my assets to correct this problem. One was an engineer with an insatiable curiosity; the other was a mechanical design genius. I developed specifications for what we needed, gave them to the engineer, and told him to find a machine that could make the tools in no more than 3 seconds. He spent a couple of months traveling throughout western Europe and finally found something close to what we needed in Switzerland.

He brought the machine back with him and my in-house mechanical design genius modified it to fit our specifications and we began making each tool in 3 seconds.

As I stated before, I learned that I would get the best results from technical resources if they were used in areas that they were most interested in.

Leveraging information technology
Soon after this I was promoted and became the head of engineering, maintenance, and utilities for that facility, where there were 20 producing cost centers. Each one had a supervisor who in turn had a cadre of production workers and a small group of maintenance workers whose job it was to quickly fix problems on the run when they were called upon. This type of maintenance service was pretty common in the fiber industry at that time. This was not a very efficient use of labor because when the equipment was running smoothly, the labor was idle.

I suggested that we develop software (remember this was the early 1970s) where production workers could request these jobs on handy workstations. The work requests would immediately go to a centralized computer that would prioritize the requests and direct a mechanic who had the needed skills to address the job. Since the computer would know where each of the maintenance craftsmen was working, this would be an efficient application of manpower.

The facility had more than 3000 employees who had to be trained to enter the needed information at the input stations. So if an operator found that yarn was breaking and wrapping on a position on one of her stations, she would input that information to the computer.

The computer would be receiving requests from the entire facility and prioritizing them so that the jobs that would provide the greatest safety and financial return would be done first.

Since the operator’s request also identified the skill needed, the computer would select a person with the required skills and the shortest travel time to perform the task.

When the mechanic arrived at the position needing maintenance, he signed in to the input device. After completing the task, an accounting of materials used would also be input and the computer would automatically see that the supplies were replenished. Once the task was completed, the operator would report to the computer the time when the position was restarted and production resumed.

Now we had captured the exact time that machine or machine position was down, the time it took for the mechanic to arrive, the elapsed time of the repair, and the lost time, if any, to restart the machine or the problem position. And this was done with 1970 technology.
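The dispatch logic described above can be sketched in a few lines of modern code: work requests sit in a priority queue (most urgent first), and each job goes to an idle mechanic who has the needed skill and the shortest travel time. The names, skills, and travel times below are illustrative assumptions, not details of the original 1970s system:

```python
import heapq

def dispatch(requests, mechanics):
    """requests: (priority, skill, location) tuples; lower priority = more urgent.
    mechanics: {name: (skill, {location: travel_minutes})}."""
    queue = list(requests)
    heapq.heapify(queue)          # most urgent request comes off first
    assignments = []
    while queue:
        _, skill, location = heapq.heappop(queue)
        # candidates: mechanics with the right skill, keyed by travel time
        candidates = [
            (travel[location], name)
            for name, (mech_skill, travel) in mechanics.items()
            if mech_skill == skill and location in travel
        ]
        if candidates:
            _, name = min(candidates)     # shortest travel time wins
            del mechanics[name]           # mechanic is now busy
            assignments.append((location, name))
    return assignments

mechanics = {
    "al": ("electrical", {"spin-1": 5, "spin-2": 12}),
    "bo": ("mechanical", {"spin-1": 8, "spin-2": 3}),
    "cy": ("mechanical", {"spin-1": 2, "spin-2": 9}),
}
requests = [
    (2, "electrical", "spin-1"),   # lower number = higher priority
    (1, "mechanical", "spin-2"),
    (3, "mechanical", "spin-1"),
]
jobs = dispatch(requests, mechanics)
```

The sketch omits what made the real system complete: material accounting, automatic replenishment, and the downtime timestamps captured at each step.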

Boldness opens dialogue
At the time I requested the computerized process, it was a bold and perhaps seemingly outrageous move. I have found that a really good strategy for large ideas is to wrap them in a bold and outrageous package. If you have developed a reputation for materially helping to improve output and lower costs, management is reluctant to turn you down. Boldness and outrageousness open dialogue while incremental improvements lose their luster very easily, if any luster was there in the first place.

Later in my tenure at this facility, I learned that if you wanted to reduce production costs you must produce larger packages. So I recommended to my engineers that perhaps we could spin a package that was 60 lb, double the present size. We did it and then transported these larger packages and loaded or creeled them on to drawtwisting machines.

Each drawtwisting machine was producing 72 5-lb packages at the time. I wanted the biggest package we could produce. My mechanical design genius said that he believed he could design a 20-lb package. We also decided that instead of hauling the 20-lb bobbins to the next operation, we would devise a conveying system that would become the creel for the next operations.

This was a bold move. The vice president wanted to support it because of its tremendous savings and his confidence in our abilities, but at the last minute he got cold feet and purchased thousands of 10-lb bobbins because his advisers were telling him it would be impossible to develop a 20-lb bobbin that would not crush under the forces created by the nylon yarn winding onto the flanged spools. Because of this design concern, management decided to take the bobbin design away from us and give it to the central engineering department designers, who had more experience with bobbin design. The vice president purchased the 10-lb bobbins because of the failure of their designs.

I asked my designer if he could make a 20-lb prototype bobbin that would work and not crush. His first prototype performed as we intended. This was a severe blow to central engineering management.

I believe that the central engineering bobbin designers had developed paradigms of what would work and what would not work. My designer was a mechanical expert who had not designed a bobbin before so he had no built-in restraints.

Paradigms are extremely powerful. Although they can sometimes provide order, they can also provide obstacles.

Failure analysis pays off

While all this design work was going on, I had grown my maintenance engineering staff to about 12 engineers of different disciplines. This group of fine professionals entered, among other things, the world of failure analysis. First, they gathered information about methods and techniques being used in the aircraft industry. These techniques used probabilities to forecast failure events. But I thought, why do that when we had plenty of actual failures to study and could generate real information? As a result, my young engineers and I began to develop methodologies of our own.


Applying all the methods of reliability engineering that we assembled and developed resulted in our plant polymer and fiber processes operating at a very high level of on-stream time. For example, our four polymer processes were on stream an average of 98 percent of the time for the 10 years that we kept records. Many people manipulate uptime figures by excluding times they feel they were not responsible for, or in other ways, but my figures always accounted for every incident of downtime and for every hour of the year.

This was a remarkable achievement but I learned one or two important human concepts:

1. It is difficult for most people to accept large ideas and big accomplishments. It is like throwing a $10,000 bill on the ground of a busy street. People see it but no one picks it up because they cannot accept that it is real.

2. When large achievements are made, only really great managers and executives will acknowledge the originator. Some are afraid to give that much attention to someone else. Many others are reluctant to acknowledge the originator of the idea because they are afraid of alienating the support group that worked on the accomplishment.

Well, because of my track record the corporation decided to move me to their R&D operations at their central headquarters to continue the development of this new reliability engineering technology and spread it to the rest of the corporation. I refused to move on the basis that the company had three producing facilities where I was and that was a better laboratory than the pristine facilities at the home office. They bought my argument.

In 1972, the Reliability Center was established to further develop reliability concepts and to spread these concepts to the entire corporation. I directed and managed this operation. In the years that followed, we continued our development of reliability techniques and we consulted and introduced our methods into most of the company’s chemical plants in the United States.

More lessons learned
This is some of what I learned in those years:

• Challenge parochial pride to develop innovative approaches to improve performance. One way to do it is to suggest that, if they do not apply themselves, outsiders will be brought in to help.

• Managers often lean on a confidant who usually has his own agenda. If performance is improving, the confidant may be needed as a competent sounding board for the manager. If performance has been deteriorating, it may be that the manager is getting poor advice.

• Everyone has an agenda. His agenda may or may not conflict with the goals of management. Steady improvement in facility performance is a good indicator that individual agendas support the managers’ goals. Steady deterioration in performance is a clear marker that they do not.

• Some employees like to know the rules and are quite content to follow them. Others need space and responsibility with accountability to perform. Take a lesson from the armed services and provide excellent and realistic training, responsibility with accountability, and great support. That is a formula for success, but only for the people who need space.

Experience-based lessons and guiding principles

• Assign subordinates to tasks that they have the training and inclination to perform.
• Develop a reputation for outstanding performance; then use it as political capital.
• Find ways to advertise the success of your group.
• Wrap big ideas in BOLD and OUTRAGEOUS packages to stimulate dialogue.
• Be willing to share the credit and be gracious about it.
• Be smart and strategize how to move people who are stuck in their own paradigms.
• Make training as realistic as possible and reinforce it with job aids as needed.
• Have confidence in your ability to learn; don't be embarrassed by people who assume you are not up to par.
• Think bigger than the immediate situation; always look at what is possible.
• Analyze small repetitive failures down to their root causes. They are the key to good performance.


I worked in the United States when maintenance consisted entirely of repair, replace, or rebuild activities. And it was very expensive. On reflection, it was probably a good thing because our inefficiencies provided employment for men and women who had returned from World War II.

As we moved into the predictive maintenance era and started to gain efficiencies, we were expanding our markets at home and abroad. As our population grew, we used the growing workforce to staff the new and expanded industries that were developed to meet the market demands.

Today, our maintenance efforts are developing toward prevention through proaction. You know I was using that word long before it made it into the dictionary.

Minor costs add up
In the 1950s, I studied all the routine jobs that our shift mechanics in the various crafts performed. I found a chain that drove a feeder that broke just about every shift and was routinely replaced. I found that very cheap sewer sampling pumps were being replaced routinely every week. I found a very large conveyor system that kept dropping material from its ore-carrying belt onto the emergency shutdown cords just about every hour and people had to be dispatched to restart the conveyor.

I found that these minor cost incidents occur in every operation. I have been in hundreds of manufacturing operations and I have seen minor mishaps like these that routinely cost a great deal of money. As we developed and honed root cause analysis, we found that these small occurrences made a major contribution to the much larger, more expensive mishaps such as equipment wrecks, fires, explosions, and major process upsets.

I also began to realize that human beings are not very good at recognizing where our largest costs emanate from. When we have a large explosion we all can appreciate that the company will be hit with a rather large cost. But if we amortize that cost over 10 years it is very likely not our largest cost. What is our largest cost over that same 10-year period? It is those small mishaps that really do not cost much when they occur. But because their cost is so small, there is no driving force to remove the cause.

What is missing is our ability to recognize frequency. A minor incident that occurs every hour or every shift or every week amounts to really big money. A minor failure that costs $100 to correct but occurs on every 8-hr shift can cost more than $100,000 each year. If we learn to do root cause analysis and eliminate the causes, we will prevent the bigger mishaps from occurring. The smaller mishaps may not be directly related to the larger ones, but their elimination reduces the noise in our systems and builds in discipline in the way we do things.
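The frequency arithmetic above is worth making explicit: a "minor" $100 failure repeated once per 8-hour shift compounds into six figures a year. The figures below are the ones used in the text:

```python
# Annualize the cost of a small, frequent failure.
def annual_cost(cost_per_event, events_per_day, days_per_year=365):
    return cost_per_event * events_per_day * days_per_year

shifts_per_day = 24 // 8                      # three 8-hour shifts a day
yearly = annual_cost(100, shifts_per_day)     # $100 per shift, every shift
print(yearly)                                 # -> 109500, i.e. over $100,000
```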

Focus on root cause analysis
If I project what I have seen in my lifetime, I believe that the use of root cause analysis will intensify as industry, banking, healthcare, and government see its usefulness in bettering our society. I believe that the use of root cause analysis only to satisfy compliance to laws and/or standards will eventually get the bad name it deserves.

Further out, I believe that eventually predictive maintenance will yield to true root cause analysis and be displaced by it. I probably will not see it but it will come.

Charles J. Latino is president and CEO of Reliability Center, Inc., Hopewell, VA; (804) 458-0645



6:00 am
July 1, 2005

Is There a Hydrogen Economy?

As I got back into the driver’s seat having just spent more than $50 to fill up my SUV, I heard an interesting NPR news story on the coming hydrogen economy that made me want to learn more about the viability of hydrogen as a legitimate alternative source of energy in the U.S. Although this column steps away from a direct connection to maintenance and reliability, I hope you will share my interest.

The oil economy has led to many political, environmental, economic, and other human-centered problems. With the rapid rise of oil prices and terrorism creating a new urgency to rid ourselves of foreign energy dependence, clean fuel alternatives now seem more real than ever. Technology and investment are moving at a pace that could make hydrogen fuel production and hydrogen-powered fuel cells economically viable within a decade.

On June 25, 2003, the U.S. and the European Union agreed to collaborate on accelerating the development of the hydrogen economy. The $1.2 billion Hydrogen Fuel Initiative, announced on January 28, 2003, envisions the transformation of the nation's transportation fleet from a near-total reliance on petroleum to steadily increasing use of clean-burning hydrogen, according to the White House.

Jeremy Rifkin, author of The Hydrogen Economy: The Creation of the World Wide Energy Web and the Redistribution of Power on Earth, reports that more than 1000 companies around the world are already racing to the hydrogen future; the speed-up in R&D and market introduction is reminiscent of the early days of the personal computer revolution and the emergence of the World Wide Web. One telling fact is that Shell Oil already has invested heavily in two hydrogen technology companies.

A recent study by PricewaterhouseCoopers forecast that in less than 18 years hydrogen technologies and related goods and services will exceed $1.7 trillion in worldwide sales.

Think-Hydrogen points out that without hydrogen’s special qualities there would be no life on this planet. Examples include:

• Every water molecule on Earth contains two atoms of hydrogen.
• Plants use sunlight and the hydrogen in water to produce the oxygen we breathe.
• Through the process of fusion, hydrogen causes the sun to shine.
• 9 percent of the human body is made up of hydrogen.
• Hydrogen, when used in a fuel cell car, produces clean water as a by-product and causes no pollution.
• Hydrogen is infinitely recyclable, being derived from water and, when used as an energy source, is converted back into water.

General Motors has invested hundreds of millions of dollars in fuel cell research with the ultimate goal of removing the automobile from the environmental equation; it believes the automobile will lead the way to the hydrogen economy and a truly sustainable future. As alternative technologies to the internal combustion engine evolve, GM is developing a portfolio of options, including hydrogen-powered vehicles.

Unlike a combustion engine, in which exploding gas pushes pistons, a fuel cell engine strips electrons from hydrogen and uses the resulting electrical current to power a motor. Then it combines the remaining hydrogen ions (protons) with oxygen to form water, the only by-product.
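The electron-stripping step described above corresponds to the standard half-reactions of a proton-exchange-membrane (PEM) fuel cell; this is general fuel cell electrochemistry, not anything specific to GM's design:

```latex
% PEM fuel cell half-reactions (acidic electrolyte)
\begin{align*}
\text{Anode:}   \quad & \mathrm{H_2 \;\rightarrow\; 2H^+ + 2e^-} \\
\text{Cathode:} \quad & \tfrac{1}{2}\mathrm{O_2} + 2\mathrm{H^+} + 2e^- \;\rightarrow\; \mathrm{H_2O} \\
\text{Overall:} \quad & \mathrm{H_2} + \tfrac{1}{2}\mathrm{O_2} \;\rightarrow\; \mathrm{H_2O}
\end{align*}
```

The electrons released at the anode travel through the external circuit (powering the motor) while the protons cross the membrane, which is why water is the only by-product.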

Maintenance and reliability professionals should get up to speed with the technologies surrounding hydrogen production and use. A great place to start is the How Stuff Works site.

If you prefer a more hands-on approach to learning, visit the Fuel Cell Store for actual demonstration products that will get you up to speed as a hydrogen economy pioneer in a hurry.

In addition to Rifkin's book, one of the better books that should appeal to those with a technical nature is Tomorrow's Energy: Hydrogen, Fuel Cells, and the Prospects for a Cleaner Planet by Peter Hoffmann, with a foreword by Tom Harkin. Search for a discounted copy.

Yes, Virginia, there is a hydrogen economy and it should be on your radar screen. Search Google to learn more about this exciting new area and lead the pack. When your company installs its first fuel cell, you will be far ahead on the learning curve.

How will things change for you and your company in a hydrogen economy? There may be no effect over the next few years; however, that does not mean you should not start building an awareness of the impact of this important energy source. With consistent effort, steady financing (including government tax incentives), and a continued trend toward high fossil fuel prices, the hydrogen economy could be upon us in less than 10 years. However long it takes, I hope the world is ready to move beyond the age of fossil fuels and toward the hydrogen economy. Let the fuel cell revolution roll.

Terrence O’Hanlon, CMRP, is the publisher of He is the director of strategic alliances for the Society for Maintenance & Reliability Professionals (SMRP). He is also the event manager for PdM-2005, The Predictive Maintenance Technology Conference and Expo, on September 19-22, 2005 in Indianapolis, IN



6:00 am
July 1, 2005

Compressor Configuration Supports Maintenance and Reliability Needs

Many manufacturing plants employ compressed air systems in one capacity or another, and, for the most part, these systems provide similar output. While all compressed air might be the same, the engineering of that air is not. The needs of a petrochemical plant are quite different from those of an automobile manufacturing plant. Application-specific engineered air addresses those different needs.

Engineered air describes compressed air tailored to meet specific industry needs—100 percent oil-free, particulate filtered, and reliable. It goes beyond the simple output of compressed air at a specified pressure in pounds per square inch. Engineered air provides the right type of air for the right application. A good example of this concept put into practice is work done at a North American ethylene plant.

Customized configuration
Ethylene processing plants use compressed air systems in a number of ways. Air powers pneumatic tools used in general maintenance, and it is employed for other plant operations, such as blow-down and dry-out. Blow-down uses compressed air to clear pipes and vessels of debris and blockage, while dry-out uses compressed air to remove moisture from pipes prior to ethylene processing.

Ethylene plants also use compressed air combined with steam to remove the coke build-up in the cracking furnaces. Scheduled periodic removal of this solid build-up allows the overall ethylene processing operations to run more efficiently.

Recently, a world-scale North American ethylene plant was undergoing a major expansion project that required a specially designed compressor system to handle the additional requirements. With this major expansion underway, it was vital that this new system provide absolutely reliable engineered air.

In order to ensure the most dependable compressed air source possible, the compressor package included a custom-designed compressor with a high-tech control system, meeting requirements for durability and reliability, accessibility and maintainability, and high level of control (Fig. 1).

The base design for this customized system also had to meet stringent API 672 standards for packaged, integrally geared centrifugal air compressors for petroleum, chemical, and gas industry services.

Durability and reliability
A baseload ethylene plant is the linchpin of any petrochemical manufacturing complex. Ethylene is the building block and feedstock for all the downstream feeder plants. A baseload plant is expected to run continuously, maximizing efficiency and minimizing cost. Downstream plants rely entirely on it for a continuous supply of feedstock—in this case, ethylene.

This is one of the key reasons the compressor system had to be designed to be extremely durable. To ensure this durability, the customized air compressor features self-adjusting tilting pad journal bearings that can adapt to load changes, providing stability, as well as double-acting thrust bearings to accommodate all load conditions. In addition, the system features stainless steel impellers, resistant to corrosion and erosion.

The compressor systems also had to run reliably. An unplanned shutdown would adversely affect the manufacturing capabilities of the entire petrochemical complex. The compressor was built with a redundant oil system, helping eliminate the possibility of system failure and allowing the ethylene plant to meet its availability targets (Fig. 2).

Within an air compressor, the oil pump is the heart of the lubrication system; it is what keeps the machine running smoothly. If the pump breaks down, the machine comes to a grinding halt. To avoid this, we included not one, but two full capacity, full pressure pumps in the design—one motor-driven auxiliary and the other a shaft-driven main.

During regular operations, the shaft-driven main is operating, while the motor-driven auxiliary is on perpetual standby for emergency situations, providing additional overall package protection. Without this redundant system, in the event of an oil system malfunction, the entire compressor system needs to be shut down. The redundant system eliminates downtime and provides a reliable source of engineered air.
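The main/auxiliary switchover described above can be sketched in a few lines of control logic. This is a minimal illustration only; the class name, pressure units, and trip threshold are hypothetical and do not come from any real compressor control product.

```python
class OilSystem:
    """Sketch of a redundant oil-pump arrangement: a shaft-driven main
    pump with a motor-driven auxiliary on standby (names illustrative)."""

    MIN_PRESSURE_PSI = 25.0  # hypothetical trip threshold

    def __init__(self):
        self.active_pump = "shaft-driven main"

    def check(self, main_pressure_psi):
        """Switch to the auxiliary pump if main discharge pressure
        falls below the trip threshold; once switched, stay on the
        auxiliary until the system is manually reset."""
        if (self.active_pump == "shaft-driven main"
                and main_pressure_psi < self.MIN_PRESSURE_PSI):
            self.active_pump = "motor-driven auxiliary"
        return self.active_pump
```

The one-way switchover mirrors the article's point: the auxiliary exists so the compressor never has to shut down while the oil-system fault is investigated.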

Accessibility and maintainability
In order to keep the plant’s compressor system running efficiently and reliably, it was essential to design the unit for maximum accessibility and maintainability. While requiring additional time and attention from plant engineers, scheduled cleaning and maintenance are a sound investment. As with all other plant operating systems, compressed air systems that have a planned maintenance program are less likely to have unexpected breakdowns. Simply put, less downtime allows for more production. In addition, consistent cleaning and maintenance practices help keep wear and tear to a minimum. This ultimately saves money in replacement parts.

The intercooler design is an example of the ease-of-maintenance design philosophy. Within a compressor intercooler, both U-shaped and straight intercooler tubes are industry practice. However, straight tubes are easier to clean than those with a U-bend design. An engineer can simply remove the water piping, unbolt the water box, and rod the tubes in place. Rodding is not possible with U-bend tubes found in some compressors.

In addition, intercooler tubes with a water-in-tube design are easier to clean and maintain than those with an air-in-tube design that require wire brush or chemical bath cleaning. This compressor features straight intercooler tubes with water-in-tube design for this very reason. The longer it takes to clean the intercooler, the longer engineered airflow is down.

Journal and thrust bearings are other important compressor components that benefit from diligent inspection and maintenance. These bearings help provide a stable and near-frictionless environment to support and guide the rotating shaft. Properly installed and maintained, these bearings can last for extended periods of time, and regularly scheduled inspections keep them running reliably. The ethylene plant’s compressor features horizontally split bearings, which are easy to maintain, inspect, and replace. An engineer simply removes the top half of the gear case to service them. No other disassembly is required.

Interchangeability of parts is another factor taken into consideration when designing the ethylene plant’s compressor system. Interchangeability contributes heavily to ease of maintenance. Interchangeable parts save time and money. Multiple stage air compressors use a bull gear and pinion system to power the impellers at each stage of air compression. The quality of the bull gears used directly determines whether they are interchangeable. The customized compressor uses high precision AGMA Quality 13 gears.

The American Gear Manufacturers Association (AGMA) provides established gear quality ratings, ranging from 3 to 15. These numbers signify quality levels, or standards, developed by the AGMA that vary by application. AGMA Quality Level 13 gears, otherwise known as aircraft-quality gearing, are generally regarded as high-precision gears. They provide lower noise levels and, under normal operating conditions, have a longer performance life. More importantly, though, they provide interchangeability.

If the gear is AGMA Quality Level 12 or below and any one of the three pieces needs to be replaced, all three pieces—one gear and two pinions—need to be replaced. However, with Level 13 gears, rather than remove and replace all three pieces, the plant engineer needs to swap out only the component in question. This saves time and money in unnecessary replacement parts.

High level of control
The operation of multiple compressors feeding into a single plant air system needs to be coordinated, monitored, and controlled in order to accommodate various applications. An initial investment in innovative monitoring technology can ultimately pay for itself.

With that in mind, the customized compressor can be equipped with a PLC-based automatic sequencer, which permits up to eight compressor units to communicate with one another and operate in sequence according to a programmed schedule. High-tech PLC-based automatic sequencers are capable of monitoring and matching compressor supply to demand. For example, they can select which compressors to use at any given time, shutting down those compressors not necessary to plant operations, even choosing back-up units if needed. By turning multiple compressors into one, an automatic sequencer can ensure stable system pressure, which allows the entire operation to run as efficiently as possible, saving both time and money.
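The supply-to-demand matching a sequencer performs can be sketched as a simple selection routine. This is an illustrative greedy sketch, not the logic of any actual PLC product; the unit names and capacities are invented.

```python
def select_compressors(demand_cfm, units):
    """Bring compressors online, largest capacity first, until their
    combined capacity covers demand; the rest stay shut down.
    `units` maps unit name to capacity in cfm (values hypothetical)."""
    online = []
    remaining = demand_cfm
    for name, capacity in sorted(units.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        online.append(name)
        remaining -= capacity
    return online
```

A real sequencer would also rotate lead/lag units to even out run hours and honor the programmed schedule, but the core idea is the same: run only the machines that demand requires.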

In addition, a PLC-based modular control system allows for remote monitoring and diagnostic checks on the compressed air systems, helping to predict and prevent any systems malfunctions that could result in stoppage of engineered air. This can save money on repairs and replacements, as well as lost production time.

The most cost-effective systems, like the FS-Elliott Regulus control system, provide state-of-the-art technology and ease-of-use. The ideal control system should feature an easy-to-operate touch screen with a graphic color display. These features allow operators to view easy-to-understand graphics while monitoring the plant engineered air systems. An advanced system also provides easy adjustment of set-points and control mode changes using the touch screen. These flexible controls are fully adaptive, changing to meet the plant’s application-specific needs.

The control system also stores and logs operating data used for trend monitoring and preventive maintenance. This feature permits engineers to monitor and predict trends in their engineered air systems and act before problems arise. Extended monitoring of vital parameters is important for equipment protection, saving money in maintenance and avoiding repairs.
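The trend-monitoring idea above — logging a parameter over time and acting before it crosses a limit — can be illustrated with a simple linear extrapolation. This is a generic sketch under the assumption of evenly spaced samples, not a description of the Regulus system's internals.

```python
def trend_slope(readings):
    """Least-squares slope of evenly spaced readings (change per sample)."""
    n = len(readings)
    mx = (n - 1) / 2
    my = sum(readings) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(readings))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def samples_until_limit(readings, limit):
    """Extrapolate the trend to estimate how many samples remain before
    the limit is crossed; returns None if the trend is flat or falling."""
    slope = trend_slope(readings)
    if slope <= 0:
        return None
    return (limit - readings[-1]) / slope
```

For a vibration or bearing-temperature channel, a result like "five samples until the alarm limit" is the kind of early warning that lets maintenance act before a forced outage.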

Ethylene plants are extremely energy conscious. System operation closer to the surge line saves power and minimizes wasteful unloading, while lower set-points and precise control minimize energy usage. The advanced control system also makes more efficient use of the ethylene plant’s manpower. With easy-to-use remote control and monitoring capabilities, the system can reduce the tasks of the ethylene plant operators, allowing them to shift focus to other plant responsibilities.

An engineered air system is created with specific plant applications in mind, to increase reliability and efficiency. The custom-engineered compressor we installed at the ethylene plant was designed with the following key points in mind:

• Durability and reliability—A robust engineered design combined with key system redundancies.
• Accessibility and maintainability—Moving parts designed with ease of access and maintenance in mind.
• High level of control—High-tech control systems provide for efficient operation and allow plant operations to predict and prevent possible problems.

Information supplied by Addison W. Kelley, vice president global customer support, FS-Elliott Co., LLC; (626) 855-7515

Typical Compressor Operation


Fig. 1. Ambient air enters the first stage through the inlet control device (A) where it is accelerated by the first impeller (B). A radial diffuser converts the air’s velocity into pressure before the air enters an efficient scroll casing (C). Next, the air is ducted through interstage piping into the first intercooler (D). The cooled air then flows into the second stage impeller (E). The compression process is then repeated through a diffuser, into a scroll casing, and then into the second intercooler (F). Air from the second intercooler then moves through a third impeller (G), diffuser, and scroll casing before being discharged into the after cooler and the air system.


Compressor Oil system


Fig. 2. The compressor was designed with a redundant oil system. During regular operations, the shaft-driven main is operating, while the motor-driven auxiliary is on perpetual stand-by for emergency situations, providing additional overall package protection.



6:00 am
July 1, 2005

Maintenance Information Systems

Directory of EAM/CMMS software for maintenance and reliability organizations.

Enterprise asset management (EAM) and computerized maintenance management systems (CMMS) are essential to most maintenance and reliability strategies irrespective of plant size. The software must manage and optimize reliability and performance of plant physical assets and maintenance operations, support a company’s business process, and be tied in to business drivers. It must support a company’s overall asset management strategy. The software is key to information flow and moving knowledge from the plant floor up the organization to help run the business.

Buying decisions begin with an analysis of how a maintenance organization operates today and what its strategy is for the future. Total cost of ownership also needs to be considered. These systems can help organizations implement their strategy to decrease downtime, increase resource utilization, and reduce maintenance costs, and they can be viewed as a communication tool to help make better decisions.

Software can help companies improve their business but no program will do everything the way users want it to, so compromises will need to be made. A previous article, “Managing an EAM/CMMS Project—Phase one: An unbiased team approach to system selection” (MT 5/05, pg 35) discusses ways to balance the wants and needs of various plant departments that are all looking to select a package that best serves their needs.

Using software to track all maintenance activities becomes critical as more companies establish best practices to drive continuous improvement and develop KPIs to measure their progress. Management support, strictly defined maintenance work processes, and ease of use have been identified as keys to success.

Maintenance information systems run on multi-platforms using mainframe, client/server, thin client, or browser-based applications. Smaller, stand-alone systems run on PCs or local area networks. Because some powerful packages can run on a single PC or networked PCs without a midrange server, the dividing line between small and large systems has blurred. Therefore, we are including all software packages in one directory.

Many companies offer programs specifically built to be accessed across the Internet. These Web-architected programs enable rapid deployment across a number of sites using a Web browser and established wide and local area networks. Multi-site organizations can benefit from a centralized data repository which allows for normalization and standardization across plants. Another variation of this method lets users access the program through the Internet but the data resides in their own plants.

Using these approaches, maintenance personnel can access information and work orders in a number of ways—dedicated terminals and PCs, or mobile Palm-type personal digital assistants (PDAs) and handheld computers running Windows CE. Other wireless and radio frequency devices to access information are also at hand. Developments including e-commerce, supply chain integration, the Internet, and wireless technologies that first were implemented in larger plants also are benefiting smaller and midsize plants.

Some companies offer an application service provider (ASP) option to their programs. Users pay a monthly fee to access the software through an Internet-enabled workstation. The ASP stores the program and the data on its server. Users always have access to the most current version of the program. This delivery method eliminates the need for on-site hardware infrastructure, system administration, and associated costs at the user’s end and lets companies concentrate on operating their plants rather than their computer systems.

To meet the needs of the increasing number of companies that recognize the benefits of electronic transactions, some software suppliers provide Web-enabled systems that support e-procurement within their own program or allow users to integrate their EAM or CMMS system with other vendor software.

Another growing area is connectivity with programs having the ability for data integration with other plant ERP business applications, production automation and control systems, and other software in the plant.

The directory provides basic information on systems from 45 companies. The listing of maintenance information systems is followed by addresses, telephone numbers, and URLs of maintenance information system suppliers. A software/company index to help users find entries when only the software name is known makes up the final section.

Information in the main listing is provided in five columns: Software and Company, General Information, Technical Information, Installed Base, and Relative Cost.

• Software and company. Entries are arranged alphabetically by supplier company with a separate listing for each software package. Some companies have indicated that their offerings are a functional module of a larger enterprise resources planning (ERP) system or can be considered an ERP system in their own right, and they are so noted in this column.

• General information. Basic system architecture is indicated by M for mainframe, CS for client/server, TC for thin client, B for browser, PC for standalone or small local area network, Mb for mobile, or A for application service provider. Some systems can be configured by more than one method. The staffing figures represent the number of people engaged in research and development activities and the number of people in all aspects of the EAM/CMMS business. Year of software introduction provides added insight to installed base figures in the next to last column.

• Technical information. Five lines of information provide a basic description of application support requirements (server hardware, server operating system, database manager, client operating system, and PC operating system).

—Server hardware. Some suppliers listed specific hardware requirements for their software. Other suppliers listed computer manufacturers.

—Server operating system. Type of operating system provides more information about the system on which various applications are designed to run. Unix, a popular system for midrange computers, may be listed as Unix or as a proprietary version offered by hardware manufacturers specifically designed for their machines.

—Database manager. The relational database manager used by a program is an important selection factor for organizations with other business or back office software. If the database managers are the same, it is likely that the EAM/CMMS can work with these other applications.

The database manager is a significant contributor to the performance of an EAM/CMMS. It handles procedures that otherwise would have to be written into the application software, adding to its complexity. Many EAM/CMMS programs are written to run with a variety of databases. Other programs are written for a single database, which allows them to make better use of the features and development tools provided by the database. ODBC indicates compliance with Open Database Connectivity, an SQL-based interface from Microsoft designed for consistent access to a variety of databases.

—Client operating system. This entry lists the operating system for the client portion of the software.

—PC operating system. This entry lists the operating system for PC or PC-LAN based systems.

• Installed base. Each company was asked to indicate the installed base for its software in four site categories: process industry (Pro); discrete manufacturing (Mfg); hospitals, schools, government installations, and other facilities (Fac); and utility plants (Util). The last number (Tot) represents the total installed base.

• Relative cost. Supplier companies were asked to indicate the typical cost for installing their systems in various size maintenance departments. Cost codes and installation sizes are listed at the bottom of directory pages.
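The architecture codes defined in the General Information column above lend themselves to a small lookup table. The sketch below shows one way a reader might decode a directory entry; the function and table names are our own, not part of the directory.

```python
# Codes as defined in the directory's General Information column.
ARCHITECTURE_CODES = {
    "M": "mainframe",
    "CS": "client/server",
    "TC": "thin client",
    "B": "browser",
    "PC": "standalone or small local area network",
    "Mb": "mobile",
    "A": "application service provider",
}

def expand_architectures(entry):
    """Expand a comma-separated directory entry such as 'CS, B, A'
    into full architecture descriptions (systems may list several)."""
    return [ARCHITECTURE_CODES[code.strip()] for code in entry.split(",")]
```

Since many systems can be configured by more than one method, a single entry commonly expands to several architectures.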

Information for the directory was directly provided by suppliers who are actively promoting their products.



6:00 am
July 1, 2005

Managing an EAM/CMMS Project

Phase three: Instilling a mindset of continuous improvement for system optimization.

The biggest payback opportunities in an EAM/CMMS implementation are often never realized. During product selection, goals were set for the application and specific business benefits were targeted, but the project scope was probably scaled back to “must haves” rather than the full scope of recommendations. During implementation, you kept a running list of additional features, functions, and data or process changes that were considered to be important to the success of the project but were deferred because the budget was an issue or the timing was too tight.

Even though the EAM/CMMS is now live and rolled out throughout the operation, you are not seeing all the benefits you know are possible—if only you had the chance. Implementing the “wish list” appears to be a pipe dream because priorities have shifted and project resources have been reallocated.

There never seems to be an opportune time to improve a business solution, which is why some refer to phase three as “phase never.” This article examines how to overcome this obstacle, and how incremental improvements can have a profound impact on return on investment (ROI).

Opportunity management
At each step in a project lifecycle, great ideas are postponed to an unnamed later date. As time passes, the details are often long forgotten. This obstacle to real improvement can be averted by committing to phase three at the outset of the project. That commitment, backed by organizational support and a methodology for continuous improvement, is an investment that will pay dividends for the life of the new system.

Instilling a mindset of continuous improvement early in the project ensures the switch does not turn off at go-live. From the inception of product selection and throughout the implementation, the project manager must systematically capture and “own” each deferred opportunity. The implementation audit after roll-out, as well as ongoing use of the system, will reveal fresh opportunities for improvement.

Pre-implementation deferrals

The cost benefit analysis report outlined direct financial benefits associated with a wide range of proposed improvements. That document was pivotal in obtaining necessary funding for the new system. However, project funding tends to be selectively focused on immediate needs that will return the greatest operational benefit and ROI.

An application module may not be considered mission critical, or a transition to handheld devices may be postponed. An electronic document management system (EDMS) could solve many problems, but now is not the time. Even if approved, certain planned improvements may be relegated to workarounds if the preferred vendor’s solution does not completely accommodate the requirement.

Business improvements commonly postponed to “phase never” include:

• Reliability centered maintenance (RCM). Many times, the cost benefit for a new system is based on RCM improvements. Although most EAM/CMMS systems contain reliability-centered functionality, it is not the same as having an RCM program. Although its benefits are undisputed, RCM program development is a time-consuming endeavor that is seldom conducted in parallel with new system implementation.

• Complete cost interfacing. Best-of-breed time management and materials management systems are sometimes not interfaced with an EAM/CMMS. Therefore, hourly rates, union contracts, per diems, call out premiums, shift premiums, and other factors that impact actual work order costs may not be factored in work management budgets and repair/replace decisions.

• Contractor time management. Contract services are often managed within the contractor’s own management system, and paper invoices, timesheets, and backup documents are manually entered in the EAM/CMMS. This error-prone procedure delays the ability to track contractor time and costs, when instead the contractors could be entering their data directly into the EAM/CMMS.

• Document management. The paper chase performed by engineering, planning, and craft personnel can be alleviated by implementing document management functionality or integrating to a separate document management system. CAD drawings, piping isometrics, piping and instrumentation diagrams, and exploded parts diagrams from vendor technical manuals are examples of the types of information that would save time if readily accessible.

• Project management. Project management capabilities within an EAM/CMMS system allow users to budget, approve, and manage project costs in real time. This gives time to react before a problem can escalate. The alternative is relying on financial system reports, which typically arrive 30-60 days after the fact, and learning too late that the project is in trouble.

Leftover opportunities like these are usually shelved until the new system proves its worth and/or personnel and money become available. The project team leader needs to own each of these issues and log them as post-implementation optimization candidates.
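The real-time project cost control described in the project-management bullet above amounts to a simple running check. The sketch below illustrates the idea; the function name, figures, and the committed-plus-actuals convention are assumptions for illustration, not a specific EAM/CMMS feature.

```python
def budget_status(budget, committed, actuals):
    """Flag a project the moment committed plus actual costs exceed
    budget, instead of waiting 30-60 days for financial reports.
    Returns ("over", overrun) or ("ok", remaining headroom)."""
    exposure = committed + actuals
    if exposure > budget:
        return ("over", exposure - budget)
    return ("ok", budget - exposure)
```

Run against live work-order data inside the EAM/CMMS, a check like this gives managers time to react before a problem escalates.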

Implementation deferrals
During implementation, scope boundaries are often challenged due to unexpected developments. Data quality issues or a hardware limitation may be discovered. Product or process enhancements might be required. A new report or KPI may be needed, or an interface to some obscure system might have been overlooked.

When unplanned tasks disrupt the schedule, planned activities can get shortchanged. If the scope is strictly maintained, practical improvements may be indefinitely suspended. Throughout the implementation, any issue that is consciously diverted needs to be logged by the project manager as a candidate for future optimization.

Post-implementation deferrals
Before any optimization efforts begin, it is important to complete the implementation by ensuring the initial project objectives are being met. To accomplish this, project team members should be strategically placed with key end user groups for a short period following go-live. They answer any questions, provide further guidance on new business processes, and ensure a general level of comfort to the end users. If requested, the vendor or consultant will provide resources to supplement the help desk and provide detailed knowledge of the new software system and its interaction with the new business processes.

Periodically following startup, the project team will gather performance data to verify that productivity gains are on schedule. It is easier to justify optimization efforts if the new system has delivered measurable benefits and ROI assumptions are validated. Additionally, users should be polled for system satisfaction. Which processes are still not clear? Where do gaps exist? What tasks are still being handled manually and why? Which processes take too long or have too many steps?

In the days and weeks immediately following go-live, this process of ensuring all users are getting the most out of the system can bring to light further opportunities for enhancement. More may be discovered before the project team disbands during its evaluation of lessons learned. Whatever the source, each unveiled optimization opportunity must again be centralized with the project manager for consideration.

Ownership transition
When the implementation project team finally disbands, the list of accumulated improvement opportunities must change hands. Otherwise, it could vanish into the abyss of “phase never.” Now is the time to transition ownership of the list from the project manager to the user community, who will carry on the responsibility for continuous improvement within their day-to-day work processes.

The key to continuous improvement is to establish a user group that can communicate and manage the system needs of the user community. The user group may include some project team members, but also should include new resources representing the variety of departments and sites using the system. The group will start with the list transitioned from the project manager, and assess new software problems identified by the help desk, potential enhancements identified by end users, data improvement requests, and new interface requirements. Additionally, the group should evaluate patch releases before they are implemented to ensure that the fixes are worth the effort.

Driving new gains in familiar territory
It is not necessary, nor advisable, to implement every recommendation. A cost benefit analysis should be performed for each proposed optimization activity. Even those previously quantified during product selection should be re-evaluated. The user group will weigh and prioritize the tasks according to business need, and benchmark them against best practices. The group will determine whether system, security, or database tuning can resolve the problem. Less-expensive workarounds should be considered if they can make a noticeable improvement. Only the most beneficial opportunities should be presented for management authorization and resource assignment.
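The weighing and prioritizing the user group performs can be sketched as a benefit-to-cost ranking. This is an illustrative model only; the field names and the pure ratio criterion are assumptions, and a real user group would also weigh risk, timing, and alignment with best practices.

```python
def prioritize(candidates):
    """Rank optimization candidates by benefit-to-cost ratio, highest
    first; zero-cost items (e.g. a configuration change) sort to the top.
    Each candidate is a dict with 'name', 'benefit', and 'cost' keys."""
    def ratio(c):
        return c["benefit"] / c["cost"] if c["cost"] else float("inf")
    return sorted(candidates, key=ratio, reverse=True)
```

A ranking like this makes it easier to present only the most beneficial opportunities for management authorization, as the article recommends.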

You should be experiencing déjà vu by now. Although a user group has replaced the original project team, optimization follows the same project lifecycle as a new system implementation. Activities must be justified, planned, developed, tested, trained, implemented, and supported after go-live. Continuous improvement, therefore, is truly a cyclical process and you have essentially returned to phase one.

Previous articles were “Managing an EAM/CMMS Project—Phase one: An unbiased team approach to system selection” and “Managing an EAM/CMMS Project—Phase two: Best practice methodologies for system implementation”.

C. Scott MacMillan and Lance Morris are principals of Cohesive Information Solutions Inc., 8215 Madison Blvd., Ste. 150, Madison, AL 35758; telephone (877) 410-2570

System Optimization Process Flow


The processes that are part of EAM/CMMS project optimization.



6:00 am
July 1, 2005

The Misunderstood Orphan of Reliability

Run-to-failure components require proactive corrective maintenance strategy.

The concept of run-to-failure (RTF) is widely misunderstood. Most people, engineers included, will provide the automatic response that if a component fails and nothing happens, it is a run-to-failure component. Another prevalent, but totally incorrect, assumption is that having redundant components or redundant systems automatically means the component or system is run-to-failure. These misconceptions are recipes for disaster.

Unfortunately, RTF has become the misunderstood “orphan” in the picture of reliability. The RTF definitions that exist today do not adequately address the true meaning of RTF, whether they are found in a reliability centered maintenance (RCM) publication, a regulatory publication, or any other publication. The standard definition for RTF usually reads something like: “The component is allowed to fail without the requirement for any type of preventive maintenance” or “Run-to-failure is a policy that permits a failure to occur without any attempt to prevent it.”

These definitions are far too shallow to prevent the mismanagement of this very important concept. The time has come for a very precise and prescriptive definition for identifying when a component can be classified as RTF. I have termed this “The Canon Law For Run-To-Failure.” See accompanying text “The Canon Law for Run-To-Failure.”

The Canon Law For RTF is very specific. It goes beyond the traditional definition of RTF: preventive maintenance (PM) is not required prior to failure. There is no mention that corrective maintenance is required in a timely manner after failure.

However, that is only part of the RTF story. There are several other qualifiers before a component can be classified as RTF.

RTF components are understood to:

• have no safety, operational, or economic consequences as the result of a single failure.

• be evident to operations personnel when they fail.


The Canon Law For Run-To-Failure

An RTF component is designated as such solely because it is understood to have no safety, operational, or economic consequence as the result of a single failure. Also, the occurrence of the failure must be evident to operations personnel.
As a result, there is no proactive preventive maintenance strategy to prevent failure. However, once failed, an RTF-designated component does have a proactive corrective maintenance strategy commensurate with all other components based on the plant conditions at that time.

—Neil Bloom



RTF components are important
RTF components have been mistakenly designated as unimportant because they have no significant consequence as the result of a single failure. However, after failure, the component is required to be restored to an operable status via corrective maintenance in a timely manner.

RTF does not imply that a component is unimportant; it simply means that the component does not require a preventive maintenance strategy, while other components do. All components, even RTF components, are important to reliability and must have a corrective maintenance strategy equivalent to that of all other components, prioritized according to the plant conditions at the time.

RTF components are designated as such due to the failure being evident and having no significant consequence as the result of a single failure. If it did not matter whether a failed component was restored to an operable status in a timely manner, one would question why that component was even installed in the plant.

Similarly, if a failure were forever hidden, no one ever knew about it, and it did not matter how many additional failures occurred, one also would question why that component was installed in the plant. The limited exceptions to this logic would include components that are used strictly for convenience and have no pertinent function.

Another major misconception regarding an RTF component is that repairing it after it fails is either optional or carries no requirement for timeliness. This is absolutely incorrect.

So often, engineers and senior management embrace the belief that RTF components are like secondhand junk cars—not worthy of even worrying about either before or after they fail. However, that line of reasoning is tantamount to having a flat tire, putting on the spare, and throwing the flat, with the nail embedded, back into the trunk and never worrying about it again.

One reason for this misguided logic is that preventive maintenance historically, and RCM specifically, has focused only on critical components to the detriment of all others. Another reason is the RTF terminology itself. It has somewhat of an ominous connotation.

I can remember several occasions when I had to use the words that a specific component was “governed by corrective maintenance” just to avoid using the RTF terminology because the receivers of the information in the conversation were not sufficiently astute to accept an RTF component as being anything other than totally irrelevant.

Corrective maintenance strategy needed
The fact is that all components are important to reliability, even RTF components. All components must have an equivalent corrective maintenance strategy.

A total proactive maintenance program includes corrective maintenance as well as preventive maintenance as integral parts of its strategy. Preventive maintenance is a strategy to prevent component failures. Corrective maintenance is a strategy to fix components once they have failed or have become degraded. These two entities are performed integrally to prevent a failure consequence at the plant level.

Having an RTF component as part of the maintenance plan may seem startling at first, but once you think about it, it becomes quite clear. The ultimate objective of a maintenance program is to prevent a consequence of failure at the plant level. Preventive maintenance tasks are specified to prevent component failures that can have either an immediate unwanted consequence of failure or the potential for an unwanted consequence of failure at the plant level when they fail. Corrective maintenance is specified to eliminate or reduce the vulnerability of a plant consequence should an additional component fail while any component, including an RTF component, is in its failed state.

If you do not impose a proactive corrective maintenance strategy, you run the risk of an unwanted plant consequence with an additional failure. Therefore, an RTF component that has failed must be fixed in a timely manner.

After a component has failed, whether it was governed by a preventive maintenance strategy or an RTF strategy, it is prioritized for corrective maintenance with an equivalent relative importance based on the plant conditions at that time. For example: What other equipment is out of service? What equipment performance levels are in an alert state? What associated equipment is planned for replacement? This requires a decision process, usually by Operations and Engineering, which considers all pertinent factors in attempting to prevent any possibility of a failure consequence at the plant level.
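One way to picture that decision process is as a priority score driven by current plant conditions. This is a hypothetical sketch only; the scoring weights and the `plant_state` keys are assumptions for illustration, not a standard scheme, and a real program would be set by Operations and Engineering judgment:

```python
def corrective_priority(component_id: str, plant_state: dict) -> int:
    """Illustrative priority score for a failed component (higher = more urgent).
    Every failure, RTF components included, starts with a baseline priority,
    because all failed components must be restored in a timely manner."""
    score = 1  # baseline: every failed component is scheduled for repair

    # A redundant partner already out of service raises vulnerability
    # to a plant-level consequence from one additional failure
    if plant_state.get("redundant_partner_out", False):
        score += 3

    # Associated equipment operating in an alert (degraded) state
    if plant_state.get("related_equipment_in_alert", False):
        score += 2

    # Planned replacement of associated equipment may allow the repair
    # to be bundled into that outage, lowering standalone urgency
    if plant_state.get("associated_replacement_planned", False):
        score -= 1

    return score
```

For example, under this sketch the same failed component scores higher when its redundant partner is already out of service than when the plant is otherwise healthy, which mirrors the article's point that priority depends on plant conditions at the time, not on the component's RTF label.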

The traditional vision of preventive maintenance, which I refer to as the smaller picture, is to prevent failures at the component level, prior to the component failure resulting in an unwanted plant consequence. However, I like to think in terms of the bigger picture of preventive maintenance, which is to prevent an unwanted consequence of failure not only at the component level but also directly at the plant level. Doing so includes addressing and prioritizing corrective maintenance within a total proactive maintenance strategy, and RTF components are an integral part of the bigger picture.

Neil Bloom is an RCM and preventive maintenance program consultant and author of an upcoming book entitled “Classical RCM Made Simple” to be published by McGraw-Hill later this year. He has spent more than 35 years in engineering and maintenance management positions in the commercial aviation and nuclear power industries.
