Archive | September, 2007

September 1, 2007

The Best And The Worst


Ken Bannister, Contributing Editor

I recently delivered a lubrication fundamentals seminar to a group of maintainers whose first language was not English. My opening slide depicted a 10-year-old child during the Industrial Revolution, whose job was to ensure all line shafts used to power textile mill machinery were lubricated effectively. Dickens’ famous “best of times, worst of times” quote framed the image and worked well to capture the essence of the period—or so I thought. Once I realized that the class participants were unfamiliar with Charles Dickens, A Tale of Two Cities and the Industrial Revolution, I had to explain to them my thinking behind the rhetoric and double meaning used in the slide. In doing so, I was forced to reflect on my rationale for using the slide and was surprised at just how timeless and meaningful the words and image are.

Dickens crafted his “best of times, worst of times” line—a line that became one of the most famous openings in English literature—as a way to set a tone for his portrayal of events leading up to the French Revolution. Interestingly, this historical novel was written as another important revolution was taking place—one of the greatest to date—the Industrial Revolution.

Although Dickens was alluding to the contrast between “modern” 18th century ways of life and thinking in London and Paris, and the “traditional” brutality and suppression carried out by nobility and peasants alike, his thoughts also were likely influenced by the sweeping industrial and technological changes swirling about him at the time. No wonder his words seem so profound and insightful, and that they continue to be as relevant today as when he wrote them. Take, for example, the changes occurring within our own Information Revolution.

Never before has the world witnessed such sophisticated levels of technology and communication. As a result, however, we have become so reliant on technology that we have for the most part forgotten the fundamentals, forcing ourselves into a pervasive “replace vs. repair” mentality. This approach, though, stops being a viable strategy once the technology becomes obsolete and replacement parts are no longer available.

Although today’s communication is enacted at lightning speeds, the art of correspondence seems to be failing just as rapidly. While we appear to be enthralled by the amassing of vast stores of data, rarely do we take the initiative or time to turn this data into information through which true management decisions are made.

We also appear to have become so preoccupied with predicting failure that many have neglected— or have never learned—the basics of effective planning and scheduling to get the impending failure addressed prior to a catastrophic event. Likewise, cleanliness and lubrication, the cornerstones of virtually every new and existing physical asset management strategy, have never been better understood. Still, many companies today continue to neglect these most fundamental of machine care tactics.

Maybe now is the time to do things better. Let’s take a leaf out of the “lean” strategy manual. Let’s slow down the pace. Let’s really know what we are trying to achieve. Let’s build management strategies based on clear communication and the understanding of maintenance fundamentals when laying out our programs’ foundations. To borrow more words from Dickens, these from the final line of A Tale of Two Cities, “It is a far, far better thing I do… ” Today, we all can do better. Good luck!

Ken Bannister is lead partner & principal consultant for Engtech Industries, Inc. Phone: (519) 469-9173; e-mail: kbannister@engtechindustries.com


September 1, 2007

Data, Data Everywhere!

How do businesses and organizations within a business, including Maintenance, navigate the data continent?

We go to great lengths to capture it, store it, organize it and, ultimately, use some of it! Data is all over the place within our organization, whether it exists in electronic or hardcopy format, and we constantly are looking for more of it. Data has become an essential component in our business toolbox. When faced with a business decision, we look for data to bolster our wisdom and experience. Within our information environment, however, the never-ending quest to use data efficiently and appropriately is constantly being challenged by the validity and source of the data.

Navigating the data
Most organizations today have many transaction systems to support their business processes. These include financial systems, production systems, maintenance management systems, customer service systems and the like. Transaction systems record events and subsequently store the information within their respective database environments. In some cases, many of these systems share the same database as a result of integration. In most organizations, though, islands of data and information often are the norm, as the transaction systems are truly stand-alone and never integrate with other systems. This is not necessarily a poor business process since information requirements vary within business functions.

Consider for a moment the information required to manage the maintenance organization on a day-to-day basis. This information includes work orders, spare parts and labor resources, to name a few. Do the folks in the accounts receivable department need to have access to this data? Probably not. However, this information is contained within the maintenance management system to enable the maintenance department to track and record their maintenance activities. Over time, this information continues to grow and grow within the database. At some point, some part of this data may be retrieved in the form of a report, chart or spreadsheet in order to examine trends or status. One of the primary roles of these transaction systems is to record and store data.

Now, imagine many transaction systems recording and storing data in different places. Just think, for example: every time you dial the telephone, a transaction is recorded—the date and time, the number called and the length of the call are just some of the data elements stored. Considering that there are many transactions being recorded from multiple transaction systems, how do businesses consolidate all this information and why?

The most common way to bring your data together is data warehousing. Simply put, a data warehouse is a collection of information within a single information environment, usually located on dedicated computer hardware. This type of strategy provides the user community with a common location to look for and retrieve information. Sounds easy and straightforward, but as we have learned, there are always challenges to overcome.

Dealing with the challenges
First and foremost, remember “garbage-in, garbage-out.” Data warehousing will not ensure the quality or validity of data. To a certain degree, this is one of the jobs of the transaction system, but the ultimate responsibility lies with the source and entry point of the data. As in any analysis and decision process, the supporting documentation and information must be of impeccable quality.

Secondly, be prepared for a significant effort at the beginning to initialize the data warehouse with all the appropriate data from the appropriate source. It is sometimes easy to visualize this effort as filling a wheelbarrow with shovels of data from different piles of information created by multiple transaction systems. At the very least, what you end up with is a wheelbarrow full of stuff that has little or no meaning. It becomes the duty of those responsible for the data warehouse to not only define the appropriate sources, but also define the appropriate relationships between the data elements so that retrieval performance can be optimized. This becomes an ongoing task as new transaction systems are added, removed or upgraded.
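To make the wheelbarrow point concrete, here is a minimal sketch (illustrative only) of consolidating records from two hypothetical transaction systems, a maintenance management system and a production system, into one warehouse table. The table and column names are assumptions made for this example, not references to any particular product.

```python
# Minimal sketch: consolidating two "islands" of transaction data into one
# warehouse table. All names and figures below are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for a dedicated warehouse server
cur = conn.cursor()

# Data as it might arrive from two separate transaction systems
cmms_work_orders = [
    ("WO-1001", "PUMP-07", "2007-09-01", 4.5),   # work order, asset, date, labor hrs
    ("WO-1002", "FAN-12",  "2007-09-02", 2.0),
]
production_downtime = [
    ("PUMP-07", "2007-09-01", 1.25),             # asset, date, downtime hrs
]

# One common table, so every user retrieves the same numbers from the same place
cur.execute("""CREATE TABLE asset_daily (
                   asset TEXT, day TEXT, labor_hrs REAL, downtime_hrs REAL)""")

for _wo, asset, day, hrs in cmms_work_orders:
    cur.execute("INSERT INTO asset_daily VALUES (?, ?, ?, 0)", (asset, day, hrs))
for asset, day, hrs in production_downtime:
    cur.execute("UPDATE asset_daily SET downtime_hrs = downtime_hrs + ? "
                "WHERE asset = ? AND day = ?", (hrs, asset, day))
conn.commit()

print(cur.execute("SELECT * FROM asset_daily ORDER BY asset").fetchall())
```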

Implementing the appropriate tools to retrieve, analyze and format the information is another important component of utilizing any data warehouse or data storage configuration. It is essential that the capabilities of end users be considered when providing a reporting tool. End users should not need a degree in computer programming or computer science to be able to retrieve and present desired information.

“Data mining” is a term that’s frequently used to define the process of retrieving data. While purists may correctly insist that data mining is more than just retrieving data from a data source, the casual user thinks of data mining as the activity of extracting data from a database—whether it is from a data warehouse or a transactional system. Fortunately, there are many tools available today to perform this function, from spreadsheet software to sophisticated business intelligence toolsets. Presentation of retrieval results can vary, including reports, information dashboards, multi-dimensional charts, etc. Remember, all this capability requires quality data, training of users on the right tools and enough computing power to process the requests.

Often, the question is asked: why not simply provide data retrieval tools that access the transaction data environment directly? Think about the transaction system in use in your job today, the last time someone (or you) attempted to retrieve a large amount of information (maybe accidentally), and the resulting decline in system performance as evidenced by the moans and groans coming from an adjacent cubicle. Where there are many users on the transaction system, the opportunity for this problem to occur increases. In an effort to keep it from happening, data warehouses typically are installed on their own computer—which can be sized appropriately for this activity without competing with transaction processing.

Security of information is another challenge facing the implementation of a data warehouse and data access tools. Just as the transaction systems have their own security governing who can see certain data and perform certain tasks, the same capability has to be implemented within the data warehouse for obvious reasons.

Data warehouse questions
The previous paragraphs have identified some of the considerations for a data warehouse environment. The greatest advantage of a data warehouse is a common set of data elements for use by the organization. This eliminates “my data doesn’t agree with your data” situations since all of the information is coming from the same place. Of course, timing is constantly an issue as data is always retrieved at a point in time and, as a result, is continually changing. Typically, where there is a great quantity of information from numerous transaction systems with many users, the benefits of data warehousing far outweigh the challenges.

What about small- to medium-sized organizations? Does the data warehouse strategy make sense? Such questions should be considered on a case-by-case basis.

For many small companies, data warehousing is probably unnecessary due to the volume of data and number of users. For medium-sized companies, the strategy is dependent upon two factors: the larger the data volume and number of users retrieving the data, the more appropriate the strategy becomes.

Using the right tools
Today, there are many tools available within the transaction software to retrieve and present data. Don’t let that, however, obscure the fact that evaluating and analyzing the data is still the most important activity.


Remember, too, that trends may be more important than snapshots. For example, losing a baseball game in the middle of a major league season is only a single event—losing eight consecutive games in the middle of the season is a trend that requires corrective action. Similarly, within the maintenance arena, schedule compliance being low for a week is not likely cause for concern. On the other hand, a trend downward for several weeks does require attention.
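As a simple illustration of the difference between a snapshot and a trend, the short sketch below flags schedule compliance only after several consecutive weekly declines. The four-week window and the sample figures are assumptions chosen for the example.

```python
# Minimal sketch: flag a sustained downward trend, not a single bad week.
# The window length and sample values are illustrative assumptions.

def downward_trend(values, weeks=4):
    """True if the last `weeks` values decline week over week."""
    recent = values[-weeks:]
    return len(recent) == weeks and all(b < a for a, b in zip(recent, recent[1:]))

weekly_schedule_compliance = [0.92, 0.90, 0.88, 0.85, 0.81]   # fractions of 1.0
if downward_trend(weekly_schedule_compliance):
    print("Schedule compliance has declined for several weeks; investigate.")
```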

“Information overload” and “analysis paralysis” are terms to keep in mind when determining data and information requirements. Collecting, storing and retrieving data that does not provide value is a waste of valuable resources. To get the most out of information systems, improve the quality of the data that will support sound management decisions, while using appropriate tools to improve and enhance the performance of the organization. Although effort is noble, in the end, it’s results that count!

C. Paul Oberg is president and CEO of EPAC Software Technologies, Inc., a developer and integrator of Computerized Maintenance Management Systems based in East Greenwich, RI. A Certified Management Consultant, he has significant experience in operations improvement, productivity improvement, manufacturing/distribution management, Total Quality Management and the design and implementation of manufacturing systems. Oberg’s professional affiliations include the Institute of Management Consultants, Institute of Industrial Engineers, Society for Maintenance & Reliability Professionals, and the American Production and Inventory Control Society. Telephone: (401) 884-5512; e-mail: Cpoberg@epacst.com


September 1, 2007

Embrace "Firefighting" Maintenance

Comparing the operations of a typical fire department with the operations of a well-run maintenance organization is not a stretch. The parallels go on and on.

For example, according to Chief Scott Tegler of the Woodstock Ontario (Canada) Fire Department, fire crews strive to attain a mean time to response target of less than four minutes, from the time the 911 call is received to the time the fire engine and crew leave the station. This important key performance indicator (KPI) is comparable to the response component of the mean-time-to-repair (MTTR) indicator. Used to define the level of maintainability within a maintenance department, MTTR is one of a number of KPIs that denote the service level performance provided to the customer and measure the effectiveness of the many preventive programs being carried out when not fighting fires.

Prevention in the firefighting world plays an important role in ensuring a “state of readiness” and confidence in knowing the equipment will perform exactly to specification every time. Preventive maintenance of all equipment is scheduled on a daily, weekly and monthly basis. It includes operation checks, lubrication and mandatory parts replacement. Much of this work is carried out by the firefighters themselves in a Total Productive Maintenance (TPM)-like fashion, promoting intimate knowledge and ownership of the equipment.

Planning the work/working the plan
One of the most striking parallels between a fire department and a maintenance department is seen in the prevention/job preparation component. In a fire department, job plans are put together in a Reliability-Centered Maintenance (RCM)-like fashion for all industrial facilities within the department’s geographical area of responsibility. Each shift is assigned a segment of the community and puts together a pre-plan for fighting a fire at each company. Typical plans include Emergency Response Plans (ERPs), type of operation, emergency contact personnel and numbers, chemicals and toxic waste kept on site, building construction type, etc. This information is then shared with all staff. Should an emergency occur, the suppression crew is now in possession of a planned response to virtually any condition they may find at the incident site.

Another important aspect of prevention is carried out through regular fire code inspections of building structures throughout the community, not unlike equipment condition checks performed by maintenance on a daily basis. This activity keeps the fire department in touch with its customers and allows the parties to establish a good working relationship. Categorizing and tracking incident causes, not unlike Fault Code Analysis (FCA), also allows the fire department to put together an education program aimed at future prevention of similar incident occurrences. Program success is measured through the reduction of incidents over a set time period, much like it is with equipment reliability.

Training/training/training
Training, as with any World-Class organization—maintenance included—is foundational to achieving a minimum level of competence and rapid response. All fire departments train to the generally accepted standards of the International Fire Service Training Association (IFSTA) essentials of firefighting. Firefighters are certified by examination on both theory and practical components and are expected to renew their certification every five years, in a similar fashion to tradespersons.

Walk into any good maintenance department and you will find mapped-out work flows and standardized operating procedures. These are used to gain consistency in how an operation is performed—and to quickly train and refresh the maintenance personnel in the operational and work method requirements.

Similarly, fire departments develop and utilize Standard Operating Guidelines (SOGs) as part of their training process. SOGs are operational guidelines that detail the purpose, scope and procedure to be followed for most operations performed within the department. These can include vehicle operation, wearing of protective clothing, use of specialized equipment, and maintenance and performance conduct. These guidelines are living documents that are reviewed and updated on a regular and “as needed” basis. Although most SOGs are standardized across all fire departments, each individual department can and does tailor them specifically to its exact needs.

Another training method used for consistency and preparedness involves the use of 5S techniques. Based on these techniques, every piece of equipment, including clothing, is assigned its own space. This is a crucial factor in a fire department’s ability to achieve rapid response times.

Working as a team
To ensure that team members really work as a team, fire departments subscribe to a simple four-step plan of action for every response:

1. PLAN…Understand what can occur, prepare and prioritize actions that must occur.

2. BRIEF…Inform the team of the plan and discuss it prior to an event and en route to an incident.

3. EXECUTE…Perform the required tasks.

4. DEBRIEF…Discuss what happened.

This type of four-step plan is a hallmark approach to assuring availability and reliability through the understanding of failure.

Redefining “firefighting”
The successful operation of a fire department relies heavily on a total proactive approach to every aspect of its job. The seemingly reactive response to a fire alarm could not be a more planned event. A well-run maintenance department can draw many parallels to the operation of a fire department. Thus, the next time you are tempted to comment on a maintenance department as operating in a “firefighting” mode, mean it as a compliment!

HELP YOUR FIRE DEPARTMENT HELP YOU

In times like these, where so many companies and sites are concerned about safety and security around their facilities, there are many things a plant maintenance department can do to help its local fire department prepare for and deal with a crisis situation.

Fire Chief Tegler points out that most companies/sites fail to understand that when they are performing a certain activity within the organization, they are 100% responsible for mitigating and having the ability to cope with potential negative consequences. This means a company/organization must perform a risk analysis on all plant operations and assess the training necessary to facilitate disaster prevention along with the resources required to manage any potential disaster that may occur.

“Many times,” Tegler says, “I have sat down with safety committees of various organizations and seen that their ERP plans have placed responsibilities such as evacuation and hazard rescue as the entire responsibility of the emergency responders.” As he notes, however, “It is impossible for us (the fire department) to cope with all elements associated with a disaster.”

Be aware that not every fire department is trained in all the same disciplines, such as confined space rescue, trench rescue, hazardous materials or medical response—especially in rural areas that use volunteer firefighters. Strike up a dialogue with your local fire department and determine its specific capabilities. You may have to build contingency into your ERP if your special needs are not met by your local fire department.

Keeping company/site records up-to-date, Tegler adds, is another particularly tough challenge fire departments face. That means keeping all relevant contacts current, including information on important company/site 24-hour contact personnel.

Be proactive
The following list details a number of proactive actions a company can take in reducing risk.

  1. Provide a plant information dossier containing:
    a. A fully documented Emergency Response Plan (ERP) that includes evacuation and hazard rescue information, delegating responsibilities to assigned personnel;
    b. A current fire exit and suppression system mapping of your facility;
    c. An up-to-date 24-hr contact listing of emergency personnel that includes e-mail, office, home, pager and cell phone numbers;
    d. A listing of all chemicals, lubricants, gases and other hazardous materials (include raw materials used in the manufacturing process) stored inside and outside the plant, complete with Material Safety Data Sheets (MSDS) and locations in building;
    e. An up-to-date building drawing.
  2. Update the above list on a four- to six-month interval basis.
  3. Work with your fire department to ensure that your facility is compliant with the latest fire code standard, allowing the fire department to perform necessary audits.

Taking these simple steps is an exercise in due diligence that may not only help minimize your risk of incident occurrence, but also reduce your insurance rates. This list, however, is offered only as a reminder of some things you can do. You MUST check with your local authorities, corporate safety and security entities and risk management/insurance carriers and consultants, and comply with their specific requirements.

THOUGHTS ON 9•1•1 MAINTENANCE

Who among us doesn’t remember the 11th day of the 9th month, 2001? Known simply and collectively as “911,” the events of that fateful day changed our world forever. I, like you and countless others glued to our television sets around the globe, saw much of it unfold in real time—jet aircraft being purposely crashed into buildings in New York City and Washington, DC, taking so many souls, and, to some extent, our innocence as nations with them.

Through the smoke, however, we witnessed something that made us very proud—innumerable acts of great courage and leadership in the face of seemingly insurmountable human challenges. This was especially true in the case of the New York City Fire Department and so many other fire departments that selflessly gave of themselves to rescue others and bring the Twin Towers and Pentagon situations under control.

Watching the events play out that day—and during the days and weeks that followed—I couldn’t help but marvel at the professionalism and state of preparedness exhibited by the fire crews under the most harrowing conditions. In the years since, I often have found myself reflecting on those images, as well as on memories of my previous visits to firehouses. Among the things that have come to mind have been the cleanliness and neatness of these facilities—where everything has a place; the spotless gleam of the fire engines; the carefully rolled and positioned hoses; and the firemen’s gear— always laid out in a “ready-to-wear and ready-for-action” manner.

Thinking back on these things, I began to question the use of the “firefighting” label to describe the worst state of maintenance—wherein a maintenance department responds to breakdowns in a “first-come, first-served,” unplanned and non-scheduled way. Prompted to find an answer that would help put this apparent paradox to rest, I sought out Fire Chief Scott Tegler of the Woodstock Ontario (Canada) Fire Department.

Through my visits with Chief Tegler, I hoped to gain a better understanding of how a typical fire department goes about its business. As documented in this article, what I saw and learned reflects what I consider to be a virtual model for World-Class maintenance—the equivalent of a truly Lean maintenance department performing both proactive activities and condition-based planned responses to incidents.

Contributing editor Ken Bannister is managing partner and principal consultant for Engtech Industries Inc, based in Innerkip, ON, Canada. Engtech provides a wide range of production & maintenance management consulting and training services for national and international clients throughout industry. Internet: www.engtechindustries.com; telephone: (519) 469-9173; e-mail: kbannister@engtechindustries.com


September 1, 2007

Consider Motor Load Requirements & Applications

Ensuring the type of service and efficiency you want from your motors begins with better motor management on your part. That includes knowing how to size them appropriately.

Most motors are run continuously with little variation in load. A continuous duty motor is energized and loaded for an extended period of time. When the motor is started, the temperature increases, then it stabilizes after some time.

If a motor has been designed with a service factor, it is possible to run it at a higher than rated load for short periods of time without significant thermal damage to the windings, rotor or bearings. A motor to be used with a continuous load is sized based on that load rating.

There are, however, many applications where a motor is not loaded consistently throughout its duty cycle, or is energized intermittently.

Some motors are started and stopped often, while others are loaded lightly for some time, then more heavily for some time. As a result, the applied load can vary greatly.

If there are periods of time where the motor is operating at less than full load, then it may be possible to size the motor smaller than the maximum load level.

Efficiency considerations
If it is possible to use a motor that is rated below the maximum horsepower level required, the obvious advantage is a lower initial cost. However, the lifetime cost of the motor also will be lower, since the overall efficiency will be higher. If a larger motor is used, for most of the duty cycle the motor may be running lightly loaded, and consequently at a lower efficiency and power factor. A motor is most efficient when the load is close to full. If the motor horsepower is lower than the peak level, it will not be so lightly loaded for most of the duty cycle, and will run more efficiently.

An intermittent duty cycle is one where the motor is subject to periods of load and no load and/or rest. These motors are sized based on the horsepower requirements of the load.

There are obvious concerns with heating of the motor if it has an intermittent duty cycle. If a motor is started too many times in succession, without being given sufficient time to cool down, the rotor may overheat to the point of melting, or the stator winding may fail prematurely.

If the duty cycle consists of periods of load and no load, then heating is not as much of a concern because there is still airflow as long as the rotor is turning (assuming an internal or external fan is present). Typical time ratings for intermittent duty motors include 5, 15, 30 or 60 minutes.

Load variations
For applications with a repetitive duty cycle, the load varies at specific intervals of time. These intervals generally repeat and do not change during the duty cycle of the machine. The actual loads, however, can vary widely, from almost no load to more than full load of the motor used in the application. An example of this type of application would be an injection molding machine.

The root-mean-square (RMS) value of the horsepower over one cycle can be calculated to estimate the possible heating effect on the motor. The RMS horsepower is the square root of the sum of each horsepower value squared multiplied by its time interval, divided by the sum of all the time intervals. To determine the RMS load on the motor, use the following equation:

RMS hp = √[ (hp₁² × t₁ + hp₂² × t₂ + … + hpₙ² × tₙ) ÷ (t₁ + t₂ + … + tₙ) ]

As long as the RMS horsepower does not exceed the full load horsepower of the motor used in the application, the motor should not overheat. This, of course, is only true as long as there is adequate ventilation during the entire cycle. To keep it simple, we have disregarded the effect of acceleration time on a self-ventilated motor.
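The same calculation can be written out in a few lines of code. The sketch below implements the RMS horsepower equation for a list of (horsepower, time) segments; the duty cycle shown is hypothetical and is not the one tabulated in Table I.

```python
# Minimal sketch of the RMS horsepower calculation for a repetitive duty cycle.
# The sample load profile is hypothetical, not the article's Table I data.
from math import sqrt

def rms_horsepower(segments):
    """segments: (hp, duration) pairs covering one complete duty cycle."""
    total_time = sum(t for _, t in segments)
    return sqrt(sum(hp * hp * t for hp, t in segments) / total_time)

duty_cycle = [(20, 1.0), (60, 0.5), (10, 1.5)]   # hp and minutes per segment
print(f"RMS horsepower: {rms_horsepower(duty_cycle):.1f}")
```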

Example…
To properly size a motor for varying, repetitive duty, you will need to know the duration and horsepower load for each portion of the cycle. It is helpful to develop a graph showing the required horsepower vs. time, as shown in Fig. 1, as well as a table listing each time interval and its horsepower, as shown in Table I. Using the RMS horsepower for this example gives the following result:

[Eq. 2: RMS horsepower calculated from the Table I duty-cycle values]

In this example, the RMS horsepower is 39.3. To allow for voltage variations, as well as to provide a little extra margin of safety, the motor for this application can be sized at 40 hp with a 1.15 service factor or at 50 hp with a 1.0 service factor. Neither of these motors would overheat in this application.

Make sure that the motor actually can deliver the maximum required torque. This means that the breakdown torque (BDT) of the motor must be higher than the highest horsepower load torque throughout the duty cycle. If the motor cannot deliver this torque, the motor may stall.

Using our example, a 40 hp, 4-pole, Design B motor will have a minimum breakdown torque of 200% of the full load torque (from NEMA MG-1 requirements). To determine the percent breakdown torque needed for the peak horsepower, use the following equation:

% of full-load torque required = (peak hp ÷ motor rated hp) × 100

Since our maximum required horsepower calls for only 150% of the full load torque of the 40 hp motor, that motor can be used in this application.
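Because torque is proportional to horsepower at a given speed, this check reduces to comparing the ratio of peak to rated horsepower against the motor's published breakdown torque. The sketch below restates the check; the 60 hp peak load is an assumption consistent with the 150% figure quoted above, not a value taken from Table I.

```python
# Minimal sketch of the breakdown-torque check. The 60 hp peak is assumed to
# match the 150% figure in the text; the 200% BDT is the NEMA Design B minimum.

def pct_full_load_torque(peak_hp, rated_hp):
    """Peak-load torque as a percent of full-load torque, at constant speed."""
    return 100.0 * peak_hp / rated_hp

rated_hp, peak_hp, breakdown_pct = 40, 60, 200
needed_pct = pct_full_load_torque(peak_hp, rated_hp)    # 150%
print(f"Peak load needs {needed_pct:.0f}% of full-load torque; "
      f"motor offers {breakdown_pct}% BDT")
print("OK for this duty cycle" if needed_pct < breakdown_pct else "Motor may stall")
```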

It is important to keep in mind the fact that this type of analysis only works for applications where the duty cycle is relatively short. Any complete cycle that is longer than approximately five minutes will require a more involved study of the load and duty cycle. There are, however, many applications where the repetitive load cycle is much shorter and the RMS horsepower can be used to size the motor.

Cyndi Nyberg is a technical support specialist with EASA (www.easa.org).


September 1, 2007

Non-OEM Pump Rebuild Shops: Guideline Details

The first installment of this series highlighted general guidelines regarding the selection of competent non-OEM pump repair facilities. This month, these guidelines are discussed in more detail.

You get what you inspect. Accordingly, a pump user must have a repair specification. It may or may not be identical to the specification used by the non-OEM competent pump repair shop (CPRS). Where the specification or checklist of the CPRS differs from that of the user/purchaser, the issues need to be explored and the ramifications of any deviations understood. At that time, waivers are issued and details of the understanding are documented.

In any event, unless a process pump manufacturer gives specific and different values or measurements for a particular make, size or model, experience shows the guidelines in this article to be useful—and valid. Even an “in-house” pump shop would benefit from making it a habit to use and apply the following assembly dimension checklist. Some of the listed diametral clearance and/or interference tolerances will be stricter than what certain pump manufacturers allow (for reasons of internal cost savings, perhaps). But, then again, this simply illustrates the opportunities to improve on some OEM products.

Best-of-Class user shops often make copies of this checklist, laminate them and either hand them to each of their shop technicians or post them near mechanic/technician workstations. CPRS facilities use similar approaches to disseminate the information in Sidebar 1, “Best-of-Class Pump Specifications,” to their staffs.

Beyond the actual specifications listed in Sidebar 1, there are other Best-of-Class type guidelines to consider when rebuilding a pump. A CPRS certainly considers them.

Shop tools and equipment
The use of proper shop tools and shop equipment is of critical importance to reliability-focused pump users. While it is well beyond the scope of this article to explain all shop tools and their proper use, the following examples will highlight the issue.

Take, for instance, collet drivers for certain impellers. Unless this special tool is used for vertical turbine pumps (VTPs) equipped with tapered collets, the impeller will not be secured properly to the pump shaft.

Likewise, unless a shop uses the proper heating technique for bearings and coupling hubs, achieving quality workmanship will be nearly impossible. Therefore, a modern eddy current (induction) heater is high on the list of necessary shop equipment.

Accurate dimensional mapping of pump casing and rotor geometry requires contour measuring equipment. Some of this equipment is portable, often called Coordinate Measuring Machines; other types are fixed machines. All use specialized software. Despite this type of built-in automation, expertise on the part of the CPRS employees is always a great advantage.

Beyond physical tools, there are also process control procedures and quality verification steps. The policies of a CPRS are enunciated and written copies are made available to anyone who asks for them. There are no secrets—and competent shops will not attempt to be secretive. The CPRS will share this information freely with the customer.

Rotor balancing
All impellers, irrespective of their operational speed, should be dynamically (“spin”) balanced before installation, either single or two-plane. Two-plane balance is required for a wide impeller, when the impeller width is greater than one-sixth (1/6) of the impeller diameter. ISO balance Grade 2.5 is recommended here.
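The single- versus two-plane decision is easy to codify. The short sketch below applies the one-sixth-of-diameter rule stated above; the example dimensions are illustrative.

```python
# Minimal sketch of the balancing rule: two-plane balance when impeller width
# exceeds one-sixth of impeller diameter. Dimensions below are illustrative.

def balance_planes(width_in, diameter_in):
    return 2 if width_in > diameter_in / 6.0 else 1

print(balance_planes(2.0, 13.0))    # narrow impeller -> 1 (single-plane)
print(balance_planes(2.75, 13.0))   # wide impeller   -> 2 (two-plane)
```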

There is no doubt that dynamic balancing of the three major rotating pump components—shaft, impeller and coupling—will increase mechanical seal and bearing life. All couplings of any weight or size should be balanced, if they are part of a conscientious and truly reliability-focused pump failure reduction program. Couplings that cannot be balanced simply have no place whatsoever in industrial process pumps.

The preferred procedure for balancing a rotating unit is to balance the impeller and coupling independently, and then the impeller and coupling on the shaft as a single unit. Another method is to balance the rotor one time only as an assembled unit.

Increasing pump efficiency
Computer tools available to the CPRS often will favor designing new, more efficient impellers. Certain installations, however, also will benefit from a simple change to another, already existing and tested impeller geometry. Some impellers are readily available as low-, medium- and high-capacity configurations. Seriously consider a CPRS that is knowledgeable about pump efficiency matters and willing to share that knowledge.

Low- and high-capacity impellers. . .
In the majority of pump casings, it will be possible to install impellers of different widths for low- or high-capacity performance. Because of the variations in the design of the impeller vanes (angularity and number of vanes), it is somewhat difficult to predict their performance. Still, for a given impeller with a given angularity and number of vanes, one can reasonably anticipate the performance of the narrow, medium and wide impellers. In Ref. 1, the reader will find actual test data comparing the performance of impellers with three different peripheral widths. In the example shown in Ref. 1, the “normal” impeller exit dimension was 2”, whereas the high- and low-capacity impellers were 2.75” and 1”, respectively. Capacities ranged rather widely from 5000 gpm to 9000 gpm, and efficiencies bracketed 82% to 88%. It also can be said that the performance of different impellers in the same casing is, to some degree, related to specific speed and that efficiency increases with higher capacity impellers. “Over-sizing” of impellers, though, is rarely recommended.

[Table I]

Impellers with different numbers of vanes. . .
Certain pump applications require the pump performance curves to have differently shaped head-capacity curves. For instance, to overcome friction only, as in pipeline service, the highest head per stage, or a very flat curve is desirable. To overcome static head or to have pumps run in parallel as is customary in process or boiler feed services, a continuously rising head-capacity curve is usually needed for highest possible efficiency.

There are different ways to vary the shape of a head-capacity curve and generalizations are not always accurate. The following four points, however, should be noted as they again show why working with a resourceful and knowledgeable CPRS is so very important.

1. If, say, the existing impeller has six or more equidistant vanes, producing a new impeller with only five equidistant vanes may be of interest. The fewer the number of vanes, the steeper the curve. When vanes are removed, the effective fluid discharge angle decreases due to increased slip. This moves the peak efficiency flow point to the left. The efficiency will also drop, the lowest being at the least number of vanes. In the case of a seven-vane impeller reduced to four vanes, the efficiency will drop about four points.

2. If a different head-capacity curve shape is required in a given casing and the same peak capacity must be maintained, a new impeller must be designed for each head-capacity shape. The steeper the head-capacity curve, the fewer will be the number of vanes, with a wider impeller used to maintain the best efficiency capacity. For example, a 7-vane/27-degree exit angle will have a flat curve and a narrow impeller, whereas a 3-vane/15-degree exit angle will have a steep curve and the widest impeller. In other words, to peak at the same capacity the impeller discharge area must be the same, regardless of head-capacity relationships. Also, for a given impeller diameter, the head coefficient will be the highest for the flattest curve. The efficiency of the above impellers can be maintained within one point.

3. The slope of a head-capacity curve also can be increased by trimming the impeller outer diameter at an angle, with the front shroud diameter being larger than the back (hub) shroud.

4. Extending the impeller vanes further into the impeller eye can increase the slope of the head-capacity curve. An extreme version of this case is the addition of an inducer in front of the impeller. The naturally steep head-capacity performance of the axial flow inducer is then added to the flatter performance of the lower specific speed impeller. As one compares the NPSHr trend of an inducer-less impeller with an impeller fitted with a “standard” inducer and an impeller with a specially engineered inducer, it will be noted that off-the-shelf “standard” inducers may lower the NPSHr only in the vicinity of BEP operation [Ref. 1].

Restriction orifices to modify pump curves. . .
In low specific speed pumps, where impellers already are very narrow and low-capacity or narrower impellers cannot be cast, capacity reductions can be obtained by using restriction orifices in the pump discharge nozzle. Refs. 1 and 2 contain illustrations that convey these points. In all instances, a CPRS can predict the anticipated or achievable performance change when the discharge is throttled with different size orifices. If the performance of a pump absorbing 1000 kW is improved by just one percent and power costs $0.07 per kWh, the yearly operating cost savings will amount to $6132.
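The savings arithmetic behind that figure is straightforward, assuming continuous operation at 8760 hours per year:

```python
# Worked version of the savings estimate above: 1% efficiency gain on a pump
# absorbing 1000 kW, at $0.07 per kWh, assuming continuous year-round operation.
power_kw = 1000
improvement = 0.01
cost_per_kwh = 0.07
hours_per_year = 8760

annual_savings = power_kw * improvement * cost_per_kwh * hours_per_year
print(f"Yearly operating cost savings: ${annual_savings:,.0f}")   # about $6,132
```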

Wear materials for improved energy efficiency. . .
Fluid processing industries and CPRS facilities have embraced the use of current generation composite materials in centrifugal pumps to increase efficiency, improve MTBR (mean-time-between-repairs) and reduce repair costs. One such material that has been used successfully by major refineries is a proprietary PFA, a carbon fiber composite with a uniquely low coefficient of expansion and superior temperature stability. As noted in Refs. 1, 3 and 4, this one particular high-performance polymer composite has replaced traditional metal and all previous generation composite materials in pump wear rings, throat bushings, line shaft bearings, inter-stage bushings and pressure reducing bushings. The properties of this particular product eliminate pump seizures and allow internal rotating-to-stationary part clearances to be reduced by 50% or more.

For good reason then, composite wear materials are included in the 9th (2003) and later editions of the American Petroleum Institute’s pump standard, API-610. The various application points are illustrated in Fig. 1.

Only the low-expanding, high-temperature-capability, proprietary PFA materials have proven to eliminate pump seizures, provide dry-running capability and greatly reduce the severity of damage from wear ring contact. Users report freedom from pump seizures during temporary periods of suction loss, off-design operation, slow-rolling or start-up conditions. When the upset condition has been corrected, the pump continues operating with no damage or loss of performance. Conversely, when metal wear components contact during operation, they generate heat, the materials gall (friction weld), and the pump seizes. This creates high-energy, dangerous failure modes, which can result in extensive equipment damage and even the potential release of process fluid to atmosphere.

CPRS engineers know that correctly chosen proprietary PFA wear parts undergo only about 15% of the thermal expansion of certain other high-performance polymers. This is a very important distinction that contributes to the success of this engineering material. To re-state, properly applied and configured pump wear components (excluding sleeve bearings) made from this material are certain to reduce the risk of damaging expensive parts. This means reducing repair costs and mitigating safety and environmental incidents.

Moreover, reducing wear ring clearance by 50% increases pump performance and reliability through increased efficiency, reduced vibration and reduced NPSHr. The efficiency gain for a typical process pump is 4-5% when clearance is reduced by 50% [Ref. 5]. Minimized wear ring clearance also increases the hydraulic damping of the rotor, reducing vibration and shaft deflection during off-design operation. The lower vibration and reduced shaft deflection increase seal and bearing life and help users achieve reliable emissions compliance. This reduction in clearance also reduces the NPSHr on the order of 2-3 ft (~0.6-0.9 m), which can eliminate cavitation in marginal installations.

Field experience shows remarkable success when installing proprietary PFAs to achieve all of these benefits. For example, one refinery installed such wear rings and line shaft bearings to eliminate frequent seizures in 180 F condensate return service. These condensate return pumps subsequently have been in service for many years without failure. Another user improved the efficiency and reliability of two gasoline shipping pumps by installing proprietary PFA wear rings, interstage bushings and throat bushings. The shipping pumps also have been in service for many years without failure or loss of performance. Hundreds of other services and applications have benefited from properly selected composite wear components; they include pumps in light hydrocarbons, boiler feed water, ammonia, sour water and sulfuric acid.

As usual, there are many ways to investigate the cost justification for upgrading to high performance proprietary PFAs in pumps. Note, in Table I, how a good justification incorporates the value of efficiency gains in a 1000 kW centrifugal pump, where clearance was reduced by one-third.

Based on calculations in Table I, a one-time incremental outlay of $3000-$1520 = $1480 returns $26,861 per year for seven years. The first-year payback ratio is $26,861/$1480—about 18:1. Using other parameters one could reasonably arrive at even higher payback ratios. Among these are the imputed cost of avoided fire incidents and the value of re-assigning freed-up workforce members to proactive failure avoidance tasks [Ref. 4].
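Restated as a quick calculation, using the figures quoted above:

```python
# Payback arithmetic restated from the text: incremental outlay vs. yearly return.
upgrade_cost = 3000 - 1520        # one-time incremental outlay, $1,480
annual_return = 26861             # yearly savings from Table I
years = 7

print(f"First-year payback ratio: {annual_return / upgrade_cost:.0f}:1")   # ~18:1
print(f"Cumulative return over {years} years: ${annual_return * years:,.0f}")
```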

This type of cost justification is straightforward and sufficiently accurate to determine the path forward. The CPRS should take the lead in selling its upgrade services by offering tentative cost justifications. Of course, the owner-operator must be responsive and provide suitable plant statistics, as available.

Documentation
If you are considering a non-OEM repair operation, keep in mind that a CPRS—a truly competent pump repair shop—should be able to produce many detailed, real-world engineering evaluation samples of its work. This documentation should indicate that the shop is following the Best-in-Class type of guidelines included in this article. Ask to see these samples, which may take the form of reports and/or case studies. When it comes to the proper maintenance and rebuilding of their critical equipment, reliability-focused pump users want information up front—not surprises later on.

Coming in Part III
In the next installment of this series, the authors provide specific assessment criteria for those considering entrusting their pumps to a non-OEM rebuilder.

Frequent contributor Heinz Bloch is well-known to Maintenance Technology readers. The author of 17 comprehensive textbooks and over 340 other publications on machinery reliability and lubrication, he can be contacted directly at: hpbloch@mchsi.com. Jim Steiger is senior aftermarket engineer with HydroAire, Inc., in Chicago, IL. Telephone: (312) 804-3694. Robert Bluse is president of Pump Services Consulting, in Golden, CO. Telephone: (303) 916-5032.

References

1. Bloch, Heinz P. and Alan Budris, Pump User’s Handbook: Life Extension, (2006) Fairmont Publishing Company, Lilburn, GA, 2nd, Revised Edition, ISBN 0-88173-517-5

2. Bloch, Heinz P. and Claire Soares, Process Plant Machinery for Chemical Engineers, (1998) Butterworth-Heinemann, Woburn, MA, 2nd, Revised Edition, ISBN 0-7506-7081-9

3. Bloch, Heinz P., “Twelve Equipment Reliability Enhancements with 10:1 Payback”, Presentation/Paper No. RCM-05-82, NPRA Reliability & Maintenance Conference, New Orleans, LA, May 2005

4. Bloch, Heinz P., “High Performance Polymers as Wear Components in Fluid Machinery,” World Pumps, November, 2005

5. Bloch, Heinz P. and Fred Geitner, Major Process Equipment Maintenance and Repair, (2006) Gulf Publishing Company, Houston, TX, 2nd Edition, ISBN 0-88415-663-X

 

About Hydro, Inc.

All photos in this non- OEM pump rebuild series, including the cover photo of the July 2007 issue of Maintenance Technology, have been provided by Hydro, Inc. (www.hydroinc.com). Founded in 1969, and headquartered in Chicago, IL, Hydro, Inc. is the largest independent pump rebuilder in North America, providing support for industrial, municipal and power generation plants around the world. In addition to its five locations in the U.S., the company maintains service or sales centers in Australia, Canada, Venezuela and India.


September 1, 2007

Solution Spotlight: High-level power protection for oil exploration…Wanted: A More Dependable, Rugged Solution

Oil services company Allis-Chalmers Energy, Inc. knows firsthand the time and financial ramifications of not having clean, uninterrupted power.

The inherent nature of oil drilling means that all equipment must be able to operate in extremely harsh conditions—that includes equipment that is powering the various oil extraction machines. If these machines aren’t receiving clean, continuous power, operations can come to a costly halt. Through experience, Allis-Chalmers Tubular Services found out that not all power protection systems are created equal.

One of the most important tools that Allis-Chalmers uses in oil extraction and exploration is an automated hydraulic tubular make-up or break-out system known as a tong machine. The tong machine handles the important task of coupling the tubing (the pipe through which the oil flows) and regulates the rotation of the tube-connecting process.

Automated hydraulic tong machines operate in particularly harsh environments, especially when operating onshore, which is typically in a remote area without available AC power. In these cases, a generator is located inside an on-site vehicle to provide electricity. “The power coming from the generator is very polluted—full of spikes, noise and surges,” says Brad Guidry, assistant store manager and safety coordinator for Allis-Chalmers Tubular Services in Louisiana. “This unstable power leads to a significant amount of downtime. The ‘dirty’ power from the generator would send power surges and spikes through to the tong machine’s computer and destroy the circuit boards and internal power supply. We bought and installed a traditional power conditioner—a transformer-based unit. We picked one that we thought would do the job. We plugged that unit in and, though we thought we solved the problem, the result was—to our dismay—another blown computer.”

“Finally,” notes Guidry, “one of our team members told us about Falcon’s UPSs. Falcon, located in Southern California, has a distributor next door in Plano, Texas. Phil Eddelman, owner of Digital Environmental Solutions, talked to us about the load, operating location and conditions and other requirements such as the back-up time we required. Phil was very knowledgeable about the Falcon UPS line and he suggested we install an SSG-500VA-1 On-Line UPS.”

According to Guidry, since his organization installed the SSG500s, it hasn’t experienced any downtime related to power surges or spikes. “In fact,” he continues, “these UPSs work so well that we don’t worry about the quality of the power, whether we are on or off shore.”

Falcon Electric, Inc.
Irwindale, CA


September 1, 2007

Electrical monitoring & analysis…Multi-Function Transient Recorders Deliver Over And Over

Want more longer-term data for multiple users from less equipment? The benefits are big. The payback is real.

In the never-ending drive to get more from less, electric utility engineers are continually looking for ways to obtain more information from fewer pieces of equipment. To support those needs, innovative vendors are providing utilities and electric power users with devices that have greater capabilities at lower cost than earlier generation equipment. Transient recorders are one way manufacturers have expanded functionality well beyond simple electronic oscillographs.

As shown in Fig. 1, in addition to traditional sinusoid recording (for seconds of data), modern transient recorders provide long-term recording for power swings (for several minutes), continuous recording of reduced-resolution data (for days), steady-state logging (for weeks or months), power quality data, fault location and information for scheduling breaker maintenance.

Sinusoid data has been used for years as a tool for verifying the operation of the overall protection scheme, from the PTs, CTs and relays to the breakers and carrier signals. Providing details on faults not only enables protection engineers to detect faulty settings, relay algorithm problems, failed hardware and other issues, minimizing the time spent investigating problems, but also provides documented data for legal and regulatory requirements.

Demand for more data
Since the widespread power outages in the western United States in the mid-to-late 1990s, and, more recently, the August 2003 blackout in the Northeast, there has been an intense focus on the need for longer-term data. This has been reinforced with new requirements from NERC that call for Dynamic Disturbance Recorders (DDRs) at select locations on the power system.

Since recording sinusoid data for multiple minutes would create files >10Mbytes, a method was needed to store power system data at reasonable speeds without sacrificing the information available from the data. Some systems limit the number of channels or calculated values that are stored for disturbance records. An excellent balance, however, can be obtained by storing subcycle phasor data on every channel along with RMS (or DC for appropriate signals) and a frequency channel. By storing triggered data at a 100 or 120Hz rate, slower phenomena such as power and frequency swings in the 0.3–15Hz oscillation range, out-of-step conditions, generator start up and other slower, longer-term problems can be diagnosed.

This type of information also can be useful in analyzing issues associated with large motor starts and other large load changes. Moreover, it can be very useful at generating plants where problems can take several minutes to grow to the point of causing a plant trip or other serious issue. Recording a disturbance record with the faster transient record is a good compromise for less powerful systems. But, to get the most out of such recorders, it is best to have dedicated triggers that focus on these slower phenomena. By offering triggers over a variety of bands with dedicated time constants, the system is able to differentiate between a real problem and the normal oscillations that occur every day.

The more advanced systems also take this a step further by storing the phasor data at a 25- or 30-Hz data rate continuously. This provides lower resolution data for distant faults that don’t trigger recorders or where triggers were misapplied. One possible drawback to this is the volume of memory needed at the recorder. In a 32-analog-channel system, storing the RMS, phasor magnitude and angle and two frequency values requires about 6 GB of storage for two weeks’ worth of data. This is easily accomplished using modern hard drives integral to the recorder. However, streaming this type of data to an external device requires a dedicated, reliable communication link that typically cannot be used for other systems without compromising this primary function.
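That storage figure can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes 16-bit stored values and the lower 25-Hz rate; neither assumption comes from the article, but together they land close to the figure quoted.

```python
# Rough storage estimate for continuous phasor recording. The 16-bit value size
# and 25-Hz rate are assumptions, not specifications from the article.
channels = 32
values_per_channel = 3        # RMS, phasor magnitude, phasor angle
extra_values = 2              # two frequency values
bytes_per_value = 2           # assumed 16-bit storage
rate_hz = 25                  # assumed lower of the 25/30-Hz rates
seconds = 14 * 24 * 3600      # two weeks

values_per_sample = channels * values_per_channel + extra_values
gigabytes = values_per_sample * bytes_per_value * rate_hz * seconds / 1e9
print(f"Approximate storage for two weeks: {gigabytes:.1f} GB")   # roughly 6 GB
```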

Expanding capabilities
Another function available in the new generation of recorders allows them to be used as circular chart recorder replacements. Most substations had the old pen-based circular recorders that required monthly paper replacement and annual pen replacement. Typically, these provided a 15-minute average value of bus voltages for regulatory compliance requirements, were very labor intensive and of little value beyond steady-state values.

Newer systems are able to compute and store RMS values for not only the bus voltages, but the line currents, too. With the large capacity memories available now, storing 52 weeks of a 1-minute minimum, maximum and average value is a simple process. This provides superior resolution, more details beyond just steady-state values (max and min), and files can be analyzed via computer programs to compute the total time the system is operating within prescribed limits such as EN 61000-4-30.

In addition to computing steady-state RMS levels, the system frequency also can be recorded in the same type of long-term steady-state format. This can be used to replace strip chart recorders in control rooms at generating plants or control centers. Additionally, this information (frequency, Irms, Vrms, digital input status) is available on communication ports in industry-standard protocols such as DNP, reducing or eliminating the need for RTUs at some locations.

The new systems even provide the capability to create and use phase groups and line groups to compute a variety of parameters internal to the recorder. From phase groups, positive, zero and negative sequence values are computed, and voltage unbalance obtained. Combining a current phase group and a voltage phase group then provides the capability to compute Watts, VARs, Volt-Amps and power factor. These values are then used for triggering and logging steady-state values. This provides an overall measure of system loading and capacity for growth.
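For readers who want to see the math behind those derived quantities, the sketch below computes symmetrical components, negative- and zero-sequence unbalance and three-phase power from one set of sample voltage and current phasors. The phasor values are illustrative only.

```python
# Minimal sketch of phase-group calculations: symmetrical components, voltage
# unbalance and three-phase power from phasors. Sample values are illustrative.
import cmath
import math

A = cmath.exp(2j * math.pi / 3)     # 120-degree rotation operator "a"

def sequence_components(va, vb, vc):
    """Return (zero, positive, negative) sequence phasors."""
    zero = (va + vb + vc) / 3
    pos = (va + A * vb + A * A * vc) / 3
    neg = (va + A * A * vb + A * vc) / 3
    return zero, pos, neg

# Slightly unbalanced phase-to-neutral voltages (V) and load currents (A)
V = [cmath.rect(40000, 0.0),
     cmath.rect(39500, -2 * math.pi / 3),
     cmath.rect(40200, 2 * math.pi / 3)]
I = [cmath.rect(200, -0.3),
     cmath.rect(198, -2 * math.pi / 3 - 0.3),
     cmath.rect(205, 2 * math.pi / 3 - 0.3)]

v0, v1, v2 = sequence_components(*V)
neg_unbalance = abs(v2) / abs(v1) * 100      # negative-sequence unbalance, %
zero_unbalance = abs(v0) / abs(v1) * 100     # zero-sequence unbalance, %

S = sum(v * i.conjugate() for v, i in zip(V, I))    # total complex power, VA
watts, vars_, pf = S.real, S.imag, S.real / abs(S)

print(f"Unbalance: {neg_unbalance:.2f}% negative, {zero_unbalance:.2f}% zero")
print(f"P = {watts / 1e6:.2f} MW, Q = {vars_ / 1e6:.2f} MVAR, PF = {pf:.3f}")
```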

Driving forces
Over the past 15 years, there has been an explosion in the use of switch-mode power supplies in everything from PCs and consumer electronics to PLCs and other industrial controllers. These, along with adjustable speed drives and many other devices, are creating more harmonic pollution on the power system than ever before. To counter this problem, IEEE and EN standards have placed limits on harmonic magnitudes at the point of common coupling (PCC). Some countries have taken this a step further and placed regulatory limits on harmonics at different voltage levels. Consequently, utilities need to know the level of harmonics present on the power system and where they are coming from.

“Flicker” is another phenomenon that has resurfaced after many decades of limited concern. The term comes from the way incandescent lights would “flicker” in intensity due to modulation in the magnitude of the voltage. Perceptible flicker can lead to headaches, reduced productivity and other personnel problems.

Caused by rapidly fluctuating loads, flicker is measured as an instantaneous value, a perceived short-term value (Pst) and a perceived long-term value (Plt). The Pst is a 10-minute value; the Plt is a 120-minute value that can be calculated from the Pst. As with the other parameters noted above, some advanced fault recorders have the ability to compute and store a flicker value. Most use the IEC definition, which is based on 230V/50Hz, and then interpolate the data to 115V/60Hz if necessary. Storing flicker for one set of voltages at a location is typically sufficient, since flicker propagates through transformers and affects the entire location.
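The Pst-to-Plt relationship is simple enough to show directly: under the IEC flicker definition, Plt is the cube root of the mean of the cubes of the twelve 10-minute Pst values in the 120-minute window. The Pst readings below are made up for illustration.

```python
# Plt from twelve consecutive 10-minute Pst values (IEC flicker definition).
def plt_from_pst(pst_values):
    if len(pst_values) != 12:
        raise ValueError("Plt covers 120 minutes: exactly twelve 10-minute Pst values")
    return (sum(p ** 3 for p in pst_values) / 12) ** (1 / 3)

# Illustrative Pst readings for a two-hour window (not measured data).
pst = [0.4, 0.5, 0.4, 0.6, 1.2, 0.9, 0.5, 0.4, 0.4, 0.5, 0.6, 0.5]
print(f"Plt = {plt_from_pst(pst):.2f}")
```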

Voltage unbalance is another parameter that can be computed and stored by advanced transient data recorders. Two different parameters can be calculated and stored: negative-sequence unbalance and zero-sequence unbalance. As the names indicate, these are the ratios of the negative- or zero-sequence voltage to the positive-sequence voltage, expressed as a percent (or fraction). They are excellent measures of the overall efficiency of the power system; too high a value means excessive losses, costing the utility or the consumer money.
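Expressed as code, the two unbalance figures are simply ratios of the sequence-voltage magnitudes computed in the earlier phase-group sketch; the sample magnitudes below are illustrative values, not measurements.

```python
# Unbalance as a ratio of sequence-voltage magnitudes, in percent.
def unbalance_pct(seq_component_mag, positive_seq_mag):
    return 100.0 * seq_component_mag / positive_seq_mag

v0_mag, v1_mag, v2_mag = 1.5, 274.0, 2.8      # illustrative magnitudes, volts
print(f"negative-sequence unbalance = {unbalance_pct(v2_mag, v1_mag):.2f}%")
print(f"zero-sequence unbalance     = {unbalance_pct(v0_mag, v1_mag):.2f}%")
```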

Extended data reach
As with many IEDs in a substation, the modern oscillograph monitors a variety of signals that can be used for several different applications. One example of extending digital transient recorder data to newer applications is breaker maintenance scheduling. By monitoring all three phase currents through the breaker and the breaker's A or B contact, three critical parameters can be tracked in a database: total number of operations, duty and total current. Additionally, data can be presented to maintenance personnel showing the duration of fault current for each individual operation. This capability can clearly flag a problem if fault current is present longer on one or two phases, indicating a stuck pole (or an evolving fault). By using an additional analog channel, the trip coil current also can be monitored for every operation. This gives an excellent indication of the travel curve and acceleration within the breaker, so that deteriorating trends in the mechanical portion of the breaker can be acted upon before a catastrophic failure occurs.
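A hedged sketch of that bookkeeping is shown below: each recorded operation increments an operation count and adds per-phase interrupted-current duty, and a late-clearing pole is flagged from the per-phase fault durations. The field names, the I²t duty measure and the 20-ms tolerance are assumptions for illustration, not a vendor schema.

```python
# Illustrative breaker-wear bookkeeping from recorded fault data.
from dataclasses import dataclass, field

@dataclass
class BreakerWear:
    operations: int = 0
    duty_a2s: dict = field(default_factory=lambda: {"A": 0.0, "B": 0.0, "C": 0.0})

    def record_operation(self, fault_rms_amps, fault_duration_s):
        """Accumulate one operation's per-phase I^2*t duty."""
        self.operations += 1
        for phase in self.duty_a2s:
            i = fault_rms_amps.get(phase, 0.0)
            t = fault_duration_s.get(phase, 0.0)
            self.duty_a2s[phase] += i * i * t

    def stuck_pole_suspects(self, last_durations, tolerance_s=0.02):
        """Flag phases that cleared noticeably later than the fastest phase."""
        fastest = min(last_durations.values())
        return [p for p, t in last_durations.items() if t - fastest > tolerance_s]

# One recorded trip in which phase C interrupts about a cycle and a half late.
wear = BreakerWear()
durations = {"A": 0.050, "B": 0.050, "C": 0.075}
wear.record_operation({"A": 8000, "B": 8200, "C": 7900}, durations)
print(wear.operations, wear.duty_a2s)
print("suspect poles:", wear.stuck_pole_suspects(durations))
```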

Most utilities have used some form of fault location tied to the data from their oscillographs. Early methods consisted of using the current and voltage magnitudes from the recorded data and correlating those numbers to specific locations from their fault studies. The new systems provide distance-to-fault data automatically. By using line models and impedance-based algorithms, an actual fault resistance is calculated and distance computed. Now, without ever having to look at waveforms, this distance value can be obtained by the personnel who need it, but who may not be experts in analyzing oscillographs.
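The underlying idea can be illustrated with a stripped-down, single-ended impedance calculation (the simple reactance approach, which ignores fault resistance); commercial recorders use fuller line models and, as noted above, also estimate fault resistance. The phasors and line constants below are invented for the example.

```python
# Simplified single-ended impedance fault location (simple reactance method).
# Real recorders use more complete line models; this shows only the core idea.
import numpy as np

def distance_to_fault_km(v_relay, i_relay, line_reactance_ohm_per_km):
    """Estimate distance by ignoring fault resistance: use only Im(V/I)."""
    apparent_z = v_relay / i_relay
    return apparent_z.imag / line_reactance_ohm_per_km

# Invented phasors: prints roughly 46 km for a line with 0.5 ohm/km reactance.
v = 35e3 * np.exp(1j * np.deg2rad(0))       # measured phase-voltage phasor
i = 1500 * np.exp(1j * np.deg2rad(-85))     # measured fault-current phasor
print(f"estimated distance = {distance_to_fault_km(v, i, 0.5):.1f} km")
```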

Enhancing competitiveness
In this era of deregulation, re-regulation and competition, having a variety of data collected for different reports is critical to operating the overall T&D system efficiently and to attracting and keeping customers. By tracking voltage dips and surges over time, it is possible to identify which circuits merit investment to improve performance and to demonstrate quality before and after the work. One of the newest applications for power system data is recording or transmitting Synchrophasor data. This consists of using the three phase voltages to compute, and precisely time-tag, the positive-sequence phasor data. Using this data over a wide area, utilities are able to detect instabilities before they become a serious problem.
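A toy illustration of what one Synchrophasor point contains follows: a precisely time-tagged positive-sequence voltage phasor, reusing the sequence formula from the earlier sketch. The tuple layout is an assumption for illustration, not any standard frame format.

```python
# A time-tagged positive-sequence phasor point; layout is illustrative only.
import numpy as np
from datetime import datetime, timezone

A = np.exp(2j * np.pi / 3)   # the "a" operator, 1 at 120 degrees

def synchrophasor_point(va, vb, vc, utc_timestamp):
    """Return a (UTC time tag, magnitude, angle in degrees) triple for V1."""
    v1 = (va + A * vb + A ** 2 * vc) / 3      # positive-sequence voltage phasor
    return utc_timestamp, abs(v1), float(np.degrees(np.angle(v1)))

v = [277 * np.exp(1j * np.deg2rad(d)) for d in (0, -120, 120)]
print(synchrophasor_point(*v, datetime.now(timezone.utc)))
```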

In conjunction with the Synchrophasor capability, the system must be able to time-tag this data to better than 30 μsec. To reduce the error contributed by time tagging even further, the more advanced systems offer time-synchronization accuracy in the range of 50–100 nanoseconds.
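Why sub-microsecond synchronization matters is easy to show with arithmetic: a time-tag error maps directly into a phase-angle error at power frequency. The 60-Hz figure below is an assumption for a North American system.

```python
# Phase-angle error produced by a given time-tag error at 60 Hz
# (one cycle = 360 degrees = 1/60 s).
F0 = 60.0
for t_err in (30e-6, 1e-6, 100e-9):
    angle_err_deg = 360.0 * F0 * t_err
    print(f"time error {t_err * 1e6:8.3f} microseconds -> {angle_err_deg:.4f} degrees")
```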

Independent of the Synchrophasor capability, the need to analyze data from a wide geographic area (as witnessed during the Northeast blackout) makes it critical that all recording and reporting devices be synchronized. This requires some source of UTC (Coordinated Universal Time) for all of these distributed devices. Most common today is the GPS clock. These clocks typically output an IRIG-B or a pulse-per-second signal, making it easy to ensure that all devices are synchronized to within +/-1 msec. Additional considerations are UTC, local time and daylight saving time. With the merger of utilities, many systems now span several time zones, and it is best to keep all devices on the same time. Using UTC (formerly GMT) is an easy way to do this, but it isn't instantly intuitive to operators and other personnel looking at the time tags. Typical practice at most utilities has been to leave all devices on standard time, with no adjustments for daylight saving time, and, if the system spans multiple time zones, to choose one zone and keep every device on it.

For unmanned substations, the need for panel meters has been minimized; by taking advantage of the new technology in fault recorders and many other IEDs, panel meters can be eliminated entirely. With a laptop or PC, any analog input and many computed values can be displayed in real time, either locally in the substation or remotely via modem or TCP/IP. RMS current and voltage, Watts, VARs, Volt-Amps and THD are just some of these parameters.

Not only do modern transient recorders communicate with vendor-specific software for traditional data transfers, they now can interface to SCADA systems using DNP3.0 or other standard protocols. Once again, this saves on space, equipment, installation and wiring costs, not to mention engineering and drafting time when designing a new substation. With such a powerful interface, analog values, digital status and frequency values are available in real time, eliminating the need for analog transducers and reducing channel counts on the RTU.

Capture the benefits
When all is said and done, a modern transient recorder not only complies with the latest requirements from NERC, it also provides data for a variety of functions that, in the past, were generally available only with discrete devices. For example, sales and marketing organizations can benefit from power quality and historical data to provide high levels of customer service and attract new businesses. Maintenance departments can use the data for optimizing breaker maintenance schedules, tree trimming or line upgrades and other regular tasks. Protection and relay engineers can use the traditional oscillograms to analyze faults and verify operations and line models, and use the newer disturbance data for stability analysis. Planning can use the data for load growth and equipment sizing. Additionally, the system can be an interface for SCADA personnel.

In summary, a modern transient recorder is a valuable tool that can replace several discrete devices. Its advanced support software provides answers, not just raw data. These systems, when tied to expert systems, provide a quick summary of all faults detailing the faulted circuit and phase(s), magnitude and duration of fault and what protection equipment operated. Location of faults, voltage quality information and data for optimally scheduling breaker maintenance are additional types of answers that are giving utility engineers more for less.

Shane Haveron is a product manager with AMETEK Power Systems & Instruments, based in Lisburn, Northern Ireland. E-mail: shane.haveron@ametek.com



6:00 am
September 1, 2007

Uptime: Sometimes It’s The Little Things


Bob Williamson, Contributing Editor

Many businesses, facilities and plants today are involved with (sometimes obsessed with) “improvement programs.” Plenty of these “programs” seem to have come and gone over the decades, only to return with morphed identities or new buzzwords.

Each time a “new program” emerges, organizations put considerable time and resources into addressing performance problems with tools and terminology that the program’s proponents promise will revolutionize business. All the while, many causes of poor equipment performance and equipment downtime continue to be overlooked; they’re not glamorous enough to make the radar screens of most hot “new” improvement initiatives. (No, I am not beating up on Six Sigma or Lean this or Lean that. Typically, these popular improvement methods involve “higher level” problem solving.)

What I am concerned about is the fact that many of today’s equipment-related losses are quite preventable using a mix of common sense, time, minimal resources and some experienced coaching. Let’s call these the “fundamentals,” the basics for addressing equipment performance losses.

The difference
Facilities maintenance resources often are spent addressing chronic problems in a reactive manner that leads to higher and higher maintenance costs. Manufacturing flow interruptions are most often caused by equipment-related losses (downtime, speed and yield) in an equipment-intensive operation. Improvement programs coupled with reactive equipment repairs can put an added burden on already limited maintenance resources. Sustainable gains can be extremely difficult to achieve. Here are a few real-world examples from different companies on how the little things can make a big difference.

Case example #1…
Despite continuing heroics to meet production quotas, plant leadership and floor personnel were frustrated by equipment performance. The target of their frustration was an integrated process of about 10 machine sections, a constraint in the manufacturing process closest to the customer and the most troublesome processing line in the department. Some of the products that ran on the line would jam or not feed correctly. At times, the case packer would jam and destroy packing boxes. Products would fall out of the machine and jam up the chains and sprockets. Not all of the PLC control screens used in the operation of the line had a STOP button displayed. The need to return to other screens to shut off the machine, in turn, caused time-consuming delays. Because of this PLC difficulty, the emergency button frequently was used to stop the machine during jam-ups, which resulted in additional out-of-cycle delays and other problems.

The net result of all of these little things was lost production, damaged product, lower production yield, high maintenance trouble calls and lots and lots of fixing and tweaking just to make things run right for a short time. All of the major problems that were observed, recorded in the production downtime tracking system and entered into the maintenance logs involved only two of the 10 sections of the processing line. The solutions were straightforward.

Case example #2…
A relatively new automated cutting and assembly process would not cut properly. This led to quality defects or destroyed product. At times, these improper cuts would cause damage to the shear section by exerting too much pressure in the wrong direction. Shear blade screws were found to be loose and worn out, unable to hold torque. An oil film trapped behind the blade created a hydraulic pressure zone, preventing the blade from properly seating metal to metal. Long runs of consistent cuts were rarely possible. These seemingly little things contributed to extensive production delays, lost yield and increased cost per unit produced.

In both of these case examples, BASIC CONDITIONS for proper operation were not being met. Furthermore, until the BASIC CONDITIONS were met, the equipment and the processing lines would NOT be reliable…no matter the title of the “improvement program”…no matter how good the “5S” workplace organization and orderliness program…no matter how good the Six Sigma process…no matter how good the operator-performed maintenance. Until these BASIC CONDITIONS were established and sustained, the equipment would be labeled problematic. Moreover, problematic machines, equipment and processes generate lower output at higher costs, resulting in lower revenue and more frustration for plant-floor personnel.

Case example #3…
Product jam-ups were common on the discharge side of a high-speed manufacturing line. These events caused line shutdowns, lost product yield, costly quality problems that had to be sorted out and delivery delays. There were no fewer than 20 individual adjustments on the discharge mechanism to get everything aligned for proper high-speed delivery. To avoid jam-ups, the operators often would run the high-speed line slower. (What good is a slow-running high-speed line?) Another seldom-used high-speed line within 50 feet of the problematic line had half as many adjustable elements; it was simpler and, based on the operators’ testimony, less troublesome. When asked why the trouble-free line had a simpler discharge mechanism than the problematic one, the answer was: “I don’t know. It’s always been that way.”

After minimal discussion, a mechanic with a few wrenches and a couple of C-clamps moved the simpler unit to the problematic line for a trial. Production records were set! Now, the simpler delivery element is common to several high-speed lines instead of just the seldom-used one.

Case example #4…
The highest repeat maintenance trouble calls in the facility were on five equipment items. Upon analysis, we discovered that these five items were identical, just installed in different locations. The major problems among the five were the same. A root cause analysis pointed to simple solutions related to proper lubrication, correct bearings for the application and drive belt tension adjustments. Once the root causes of the chronic problems were addressed, the maintenance trouble calls ceased, freeing up maintenance resources for more preventive maintenance tasks.

The fix
Sometimes it’s the little things that CAUSE big problems. Sometimes it’s the little things we do that will ELIMINATE big problems.

In each of the case examples cited here, there were NO comprehensive preventive maintenance (PM) tasks that addressed the problematic sections of the equipment. Nor was sufficient time allotted to identify the causes of the little problems. For instance, shift production output was much more important than taking time to properly address problem causes, and fixing things fast became the goal. In these cases, the maintenance planner became the maintenance dispatcher, with no time for planning or scheduling.

The good news is that management in our four case examples eventually bit the bullet and allowed sufficient planned shutdown time to solve their respective problems. In the production operations in each case, a cross-functional team of mechanics, electricians, operators, supervisors, technicians and others was formed. Over several days, they learned the basics of equipment care and upkeep, dug into their equipment data and searched for the root causes of the chronic problems. They developed, tried and refined solutions to the problems in several hours or less. Since then, they have been setting production records and achieving sustainable, consistent production goals. More importantly, they all have learned that problems can be eliminated by taking the time to address the causes, building in the process the foundations for a new work culture. These teams truly became pockets of excellence in action.

After more than 15 years studying NASCAR race shops, talking with their leaders and crews, and studying race team pit crew methods, I see over and over again the basic principles that make them competitive in the pursuit of 100% reliability: Go slow to go fast…Do it right the first time…Speed is a result of doing things right.

The thought here is that it takes longer to do it over again than it would take to DO IT RIGHT to begin with. Thus, if you must go slow to make sure the task is done correctly, so be it: the consequences of not doing things right can be costly, and often dangerous. Once you figure out HOW to do it right consistently, figure out how to make it better and faster. Never, ever, though, compromise what is right for the sake of speeding things up.

Another principle I have learned from NASCAR teams: The more complicated the mechanism, the more chances for problems and the more variables you have to control, whether they are equipment-related or related to shop productivity management. These race teams know that “simpler is better” when trying to solve problems and improve performance. I call that “world-class simplicity.” They also make extensive use of detailed checklists in all stages of building and setting up a racecar, both to communicate and to ensure everything is done right and on time; it is a standard practice in all competitive race shops.

The focus
Sometimes it’s the little things we should focus on in our operations to eliminate problems, free up maintenance resources and reduce costs. What’s wrong with taking time to look at pesky little things that keep cropping up over and over again and working with those closest to the situation to help figure out what causes them? What’s wrong with providing some higher-level equipment troubleshooting expertise in the form of an engineer, a process technician, a mechanic or a consultant to work with those closest to the problems? What’s wrong with looking for the “simple solutions” to seemingly complex problems? Finally, what’s wrong with learning along the way to solving the problems—proper operation, proper maintenance, proper setup, proper adjustment—to sustain the gains?

Sometimes it’s the little things that get overlooked because we are focused so intently on new programs or major activities to improve plant performance. As a result, we forget the little things that affect the necessary “basic conditions” for equipment performance and reliability improvement… the little things that, if addressed, can yield huge gains, not only in output and lower costs but also in developing a new work culture of teamwork focused on common goals. It all combines into a prescription for BIG success in today’s increasingly competitive marketplace—whether your organization is competing for sales or competing for top-notch talent in an era of skills shortages.

