

1:46 am
April 22, 2014

Fighting Clogs Efficiently at a Wisconsin Wastewater Station

25 March 2014



The growing popularity of single-use towels, baby wipes and other non-woven ‘convenience’ products has become a maintenance nightmare for water-treatment operations. It should scare any operation that requires clean water.

The scenic community of Waupaca, WI, is spiritually connected to clean water.

Not only is its name a translation of a Native American term meaning “clear water,” its wastewater facility is known as the Crystal River Lift Station. So community leaders were well aware of the irony when this facility became notorious for its three pumps becoming clogged with debris after their installation in October 2002.

The 360-GPM Crystal River Lift Station is among the largest of Waupaca’s 12 sewage-pumping stations. It serves more than 10,000 residents and 3600 connections whose wastewater flows to a 1.5-MGD regional treatment plant operated by Waupaca’s utility. After treatment, the effluent discharges into the Waupaca River. It was not uncommon, however, for the Crystal River Lift Station’s wet-well to accumulate a thick mat of disposable cleaning heads, various types of wipes, towels, grease and even a band of underwear elastic. The array of crud originated from the county jail, nursing home, middle school, elementary school, hospital and other connections upline from the station.

“We experienced clogs as often as three times per day,” says Jeff Dyer, wastewater team leader. “The old pumps cavitated badly and weren’t reliable. They simply weren’t engineered to operate efficiently in that environment.”

Waupaca’s utility faced a growing problem for the wastewater treatment industry: the increased incidence of ragging due largely to wet-impregnated and dry-electrostatic wipes popular for household cleaning and personal hygiene use. Unlike traditional woven material, these often porous sheets and cleaning heads are manufactured from polymer fibers or film. Manufacturers market many as single-use “disposable” products, which the general public often mistakes for “flushable.”

When state lawmakers considered a ban on this new generation of products, trade organizations reacted. A 2008 report by INDA, the U.S.-based Association of the Nonwoven Fabrics Industry, argued that wipes that pass through residential sewers should flow equally well through wastewater collection lines and be compatible with accepted wastewater treatment plant operations. INDA and its European counterpart EDANA have since drafted guidelines that define what constitutes a “flushable” consumer product. The National Sanitation Foundation (NSF) sponsors an independent validation service to test and certify a consumer product’s flushability.

Although convenient tissue-like wipes rarely block residential sewer lines, they can strangle traditional lift-station pumps as their residual material entangles impellers and impedes or clogs intakes. Federal legislation that reduced the flush volume for residential plumbing fixtures to 1.6 gallons has only aggravated the problem for utilities. Many operators foresee the problem becoming as serious as when disposable diapers first reached the market decades ago. Diapers, however, were large enough to block residential lines before they reached the street, which created work for many plumbers at homeowner expense. Consumer news reports and word of mouth eventually led the general public to recognize that a disposable diaper is not a flushable one.

Convenience products, inconvenient clogging
At the Crystal River Lift Station, the fibrous waste from these products caused recurring overloads that clogged the recessed impellers of the facility’s former 10-HP pumps. Each clog required that a team from the four-man Wastewater Group within Waupaca’s Public Works Department be dispatched to clear the pumps. Blockages in the 26-ft.-deep wet-wells presented an inherently unhealthy environment, further complicated by seasonal weather challenges. The station’s repeated blockages and call-outs created schedule intrusions and added expense. The cost for a two-man crew sent to clear the pumps, for example, doubled to more than $31 per man-hour, including benefits. Typically, more than one hour was required to restore the pumps to operation. And if proactive monthly cleanouts exceeded the capability of the City’s Vactor truck, an outside contractor was called in with a more powerful unit that cost up to $1500 per cleaning.
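The labor figures above suggest a rough lower-bound cost model. The sketch below is illustrative only: it uses the article's two-man crew, the $31-per-man-hour rate and a one-hour minimum per call-out, and treats clog frequency as a variable.

```python
# Illustrative estimate of clog-response labor cost, using the article's
# figures: a two-man crew at about $31 per man-hour (with benefits) and
# at least one hour per call-out. Clog frequency is the variable.

CREW_SIZE = 2
RATE_PER_MAN_HOUR = 31.00   # dollars, including benefits
HOURS_PER_CALLOUT = 1.0     # article says "more than one hour" -- a lower bound

def callout_cost(callouts_per_day, days=365):
    """Lower-bound annual labor cost of clearing clogged pumps."""
    per_callout = CREW_SIZE * RATE_PER_MAN_HOUR * HOURS_PER_CALLOUT
    return per_callout * callouts_per_day * days

# At the station's worst -- up to three clogs per day:
print(f"${callout_cost(3):,.0f} per year")  # $67,890 per year
```

Even this lower bound, which excludes the up-to-$1500 contractor cleanings and vehicle costs, makes the business case for a non-clogging pump easy to see.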



Preparing to install the N pump, workers attach an adaptor for use with an existing rail-type mounting system.

When the utility’s aging pumps eventually needed overhaul or replacement, the Wastewater Department favored the latter. Several types of pumps were under consideration when a territory manager for Xylem, the manufacturer of Flygt brand pumps, responded to the utility’s inquiry. The discussion about the recurring clogging eventually focused on a Flygt N pump as a promising solution. (Editor’s Note: N-pump technology has since been incorporated into Flygt’s line of Experior products.)

To demonstrate the reliability of N-pump technology, Waupaca’s Public Works Department was offered a “try and buy” opportunity in 2008: A 10-HP unit would be installed and operated on a 60-day trial basis. If it ever clogged, the pump would be removed, without debate, at no cost. Installation was simplified with the use of the existing pump’s rail-type mounting system. The swap met expectations by performing flawlessly during the trial. In fact, the trial pump was set to operate in permanent lead-pump mode instead of the normal one-third operating sequence.

Because the trial pump ran continuously instead of one-third of the time, the 60-day trial subjected it to the equivalent of roughly 180 days of normal operation. During that period, the existing companion pumps continued to clog. This led the utility to procure a second N pump in 2009, and a third in 2012. The three 10-HP N-3127 pumps now operate in continuous sequence, with run times averaging 1.4 hours each during 11 cycles on a typical day.

“The reliability of the replacement pumps has been excellent,” reports Dyer, who adds that the facility’s “day-and-night emergency call-outs in all types of weather are no longer a factor.” MT&AP

Designed with Debris-Laden Flows in Mind

The Flygt N pump was specifically designed to handle the growing challenges of today’s higher debris-laden wastewater flows. Its improved efficiency derives largely from a self-cleaning, semi-open, backswept impeller with a horizontally positioned vane. The design contributes to energy savings by eliminating the drag imposed on earlier pumps by debris build-up on recessed impellers. The N pump’s hydraulic design eases the passage of solids while self-cleaning the impeller vane leading edges with each revolution. By eliminating impeller fouling, the pump prevents the steady build-up of fibrous material that can otherwise impose drag and compromise operating efficiency and energy use. The Water Environment Federation (WEF) recognized the pump’s engineering features with the Collection System Innovative Technology Award for 2011. N-pump technology and its adaptive functionality have since been combined in what the manufacturer now designates as its Experior line.


1:41 am
April 22, 2014

Forward Observations: Shop Goes To College

25 March 2014

Rick Carter, Executive Editor


Matt Crawford’s excellent 2009 book Shop Class as Soulcraft offers an impassioned case for a return to the values once represented by public-school shop class. I read it recently thanks to my wife, who noticed it in the library of the high school where she teaches—a school, like many others in the U.S., that closed its first-rate shop program.

The book relates how Crawford’s circuitous career (he went from electrician to writer to motorcycle mechanic) taught him the value of hands-on work, and why it all began with shop. His opening snapshot of the afterlife of the lathes, table saws and other equipment that was discarded “when shop class started to become a thing of the past,” then offloaded to secondhand markets across the country, is a sobering symbol of a larger loss for America’s youth.

Shop was not a universal career-starter, of course, but it brought value by introducing a non-college option, something most of today’s public-school students, and many industrial workers, have missed. The good news today is that this void has not been ignored: The nation’s two-year community colleges are helping to fill it by re-imagining shop’s original mission as career training, shaped by and for manufacturers.

Educator and former MARTS presenter Mark Combs has been a part of this effort since 2008, working with manufacturers and faculty to create courses and schedules that accommodate the needs of both workers and community-college students interested in industrial careers. It began for Combs when he was Program Manager for Business Training at Parkland College, a two-year school in Champaign, IL. The nearby Kraft Foods operation asked if the school could train the plant’s workers to become maintenance technicians. The request led to a partnership with Kraft that exists to this day.

At the time, Parkland’s Industrial Technology program lacked the resources to meet Kraft’s needs, so Combs enlisted other local manufacturers with similar needs to join a training partnership. He then applied for and received a grant from the U.S. Department of Labor. “This allowed the college to get new equipment and hire people to help train both incumbent workers and college students,” says Combs. The manufacturers also donated equipment and manufacturing expertise, served on curriculum advisory boards and covered half of their employees’ tuition. The other half was covered by the local workforce-investment agency.

Parkland’s courses filled quickly: 35 incumbent workers at first, says Combs, along with 45 full- and part-time college students, “most of whom were unaware that manufacturing employment paid so well and actually had career opportunities.”

Not everyone completed the program, but those who did went on to well-paying positions with Kraft or the other partners.

Though the original grant ended in 2012, it was designed so enrolled workers could continue training afterward. The last who signed on under the original funding will finish in 2015.

Combs, who now teaches part-time at Parkland, says the school has become an established training source for local manufacturers. “We have an Industrial Maintenance Technology Certificate for technicians that did not exist before, and over $750,000 of equipment purchased with grant money that is used in classes to help train maintenance technicians.” Those classes are offered on a regular basis, he says, and people are enrolling in them.

A similar situation may be underway at a community college near you. According to Combs, you’ll probably find the classes and programs there that can help train your workers. If not, he says, most faculty and department chairs are eager to hear what’s needed. These schools are aware of the skilled-labor shortage, and want to help address it. All you may have to do is ask. MT&AP


1:39 am
April 22, 2014

Uptime: Hourly Compensation Systems — Does Yours De-motivate?

25 March 2014
Bob Williamson, Contributing Editor

Money motivates us—or does it? When we receive higher pay, does it compel us to improve our performance, be more productive or become more loyal? Not necessarily. The perception that our pay is sub-par, however, can be a serious de-motivating factor that affects both performance and productivity.

In the workplace, employees are subject to a wide variety of job classification systems and pay grades that guide the amount of monetary compensation they receive. While some pay rates are negotiated through collective bargaining or are the product of compensation studies, most are based on business needs, comparable job roles, responsibilities and rates in the region. In an era of growing skills shortages, we need to ask: “Is our hourly compensation system a de-motivator? Does it discourage current employees and chase away the best and brightest we need to attract?”

Let’s explore the typical hourly compensation of maintenance employees to see how pay can de-motivate, and what can be done to change this dynamic. Consider the following real-world example from my own archives:

Happy and not-so-happy campers
The XYZ Manufacturing Co. (not its real name) was a preferred employer in the area. A well-established operation, it was staffed with a highly experienced 165-person maintenance workforce for everything from facility and utilities to assembly and production processes. XYZ had two primary maintenance job classifications: Mechanical Maintenance and Electrical Maintenance. Maintenance workers were slotted into one of these classifications from the start of their seniority in the maintenance department.

  • Electrical Maintenance work included power distribution, machinery electrical systems and instrumentation/controls.
  • Mechanical Maintenance work included welding and fabrication, machining, lubrication/oilers, fork trucks and machine repair.

Each job classification had the same four negotiated pay grades and hourly rates. The lowest pay was for new hires and trainees, the highest was for the most senior maintenance employees. Progression from entry-level to top pay was basically automatic unless penalizing behaviors or habits occurred, which was rare.

After 15 years, all 165 maintenance employees were at the top pay grade and receiving top hourly pay, including those who hired on within the previous five years. While the most senior maintenance employees received top pay, those with lower seniority quickly caught up with them. This made XYZ’s compensation system easy to manage. Annual across-the-board raises and contract negotiations were relatively straightforward. But the monetary increases affected all maintenance employees exactly the same way because all were at top pay.

The not-so-happy campers were those employees who performed highly skilled instrumentation/controls and machine-repair work. They were stuck at the top pay grade along with those in markedly lower skill-level roles. Electricians, lubrication/oilers and fork-truck mechanics, for example, were paid the same as machine repairmen and instrumentation/controls technicians.

The electricians, oilers and mechanics felt that pay was NOT an issue: They could focus on their work. However, the machine repairmen and instrument/controls technicians—whose jobs required continual skills and knowledge upgrades every time new automated machinery was installed in the plant—were highly de-motivated by their pay and disgruntled about their workloads.

Breaking the camel’s back
De-motivation among the top-skilled machine-repair and instrumentation technicians came to a head when XYZ’s primary processes were upgraded in a major engineering project. “Why should we master new technologies again and again,” they asked, “while our fellow maintenance workers get the same pay we do and don’t have to continually learn new technologies?”

Further complicating matters, XYZ was unable to find new, highly skilled machine-repair and instrumentation/controls technicians, despite the fact that new hires would start at the top pay grade. The company was unable to attract the best and brightest it needed to maintain, calibrate, troubleshoot and repair the new automated manufacturing technologies. In a practical sense, something had to be done to recognize that certain maintenance jobs required higher, frequently changing skills and knowledge. Stalemate.

The skills shortage was the real eye-opener for both the personnel department and the maintenance employees. Though XYZ’s top-performing maintenance employees ranked as senior personnel, they started to look for work elsewhere. Their skills, knowledge and experience were increasingly in demand.

Pay-for-applied-skills to the rescue
It was necessary to find a way to “stratify” the maintenance job-performance skills sets required at XYZ. This had to be done in a way that would provide advancement opportunities for all who were interested without penalizing anyone. A new, opt-in training and advancement program was created, which proceeded as follows:

A detailed duty-task analysis was performed to identify job-performance requirements (skills and knowledge) for current as well as higher-level skilled job roles (machine repair and instrumentation/controls).

Three new pay grades topping out at more than a $2.00-per-hour increment were identified with specific requirements for advancement. The new pay grades reflected opportunities for crossover skills where instrument/controls personnel would acquire certain critical mechanical skills, and machine-repair personnel would acquire certain critical electrical/electronic skills for a “multi-skill maintenance” approach.

Aptitude and ability “assessments” (not “tests”) were required to begin training, and for all higher-level job classifications. Assessments included mechanical aptitude, learning ability and computer literacy, plus reading, writing and basic math. Individual scores were to be reported only to the person being assessed and the training manager.

The existing skills and knowledge of those entering the training program were identified. Maintenance employees wishing to train for higher-level jobs then reviewed the detailed duty-task analysis of the higher-level skills and knowledge as a “self-assessment.”

Also, several highly skilled machine-repair and instrument/controls employees were selected and trained to be instructor/trainers. Their first assignment was to construct training devices with actual plant equipment that would duplicate the critical new-technology machines and controls and some essential prerequisite skills. Following their “self-assessments,” employees were asked to demonstrate their abilities to perform the identified duties and tasks on actual plant equipment or on the training devices. This performance demonstration was observed and monitored by the job-specific instructors/trainers, a supervisor and another hourly maintenance person.

Training began on critical skills for improving existing manufacturing processes, as well as those needed for emerging technologies. A variety of approaches were used, from self-study to classroom and vendor/OEM to on-job coaching. All training conformed to the duty-task analysis. Written testing was not used to verify job skills and knowledge. The same hands-on performance demonstration process—according to the duty-task lists—was used after training to assure proper job performance.

Pay no longer a de-motivator
The new job classifications, training and compensation at the XYZ Manufacturing Co. reflected the top priority needs of the plant. Pay was no longer a de-motivator for top-skilled maintenance employees. Maintenance-job roles and performance requirements were stratified.

While everyone had the opportunity to participate at the higher skill levels, many opted for improving their skills and knowledge with regard to their current job roles. Those who had reading, writing or math deficiencies for their job roles received individual tutoring and participated in skills-improvement programs.

Within a few months of completing the first wave of training and qualification, a major, chronic problem area in the plant stopped having problems. Machine performance improved and breakdowns rarely occurred. Engaged with their new work, XYZ’s former not-so-happy campers were celebrating—along with the labor union and plant management.

As aging Baby Boomers exit the workforce and younger workers try to fill their shoes, it may be time to rethink your own hourly maintenance compensation system. MT&AP

Robert Williamson, CMRP, CPMM and member of the Institute of Asset Management, is in his fourth decade of focusing on the “people side” of world-class maintenance and reliability in plants and facilities across North America. Email:


1:35 am
April 22, 2014

Don’t Procrastinate…Innovate!: Minute Maintenance, Part 2

25 March 2014

Ken Bannister, Contributing Editor


Let’s recap from Part 1 (pgs. 12-13, MT&AP, Jan. 2014): Minute Maintenance is the pursuit and implementation of innovative proactive maintenance methods, processes, techniques and tools designed to reduce or eliminate non-value-added (waste) maintenance activity to produce an efficient, effective maintenance result measured in minutes, using a lesser skill-set requirement than that needed to perform a repair.

Major interventions (i.e., overhauls and repairs) require time and high skill levels, and almost always result in major asset dependability (reliability, availability and maintainability) and production-downtime losses. Minimizing loss (waste) is achieved by understanding and identifying failure onset in a timely manner through the recognition of an asset’s current condition compared to its optimum condition so that only minor intervention is required to assure asset dependability with little or no downtime loss.

As an asset progresses through its life cycle, it can be subjected to many external influences that can cause and/or accelerate component failures. These influences include temperature (heat, cold), vibration, contamination (water, dirt), neglect and abuse—all of which conspire to produce premature wear and a corresponding requirement for major maintenance. Recognizing and managing these influences can significantly increase an asset’s reliability and service life.

Machine designers are aware they have little control over the external influences or “ambient condition factors” to which their equipment will be subjected. To compensate, they design equipment with consumable devices intended to act as part of the machine system and, more importantly, as a “tell-tale fusible link” to protect major components and systems. We know these “devices” as lubricants, filters, belts, couplings, fuses, adjustment/calibration mechanisms, and others. Intended as the “weakest link” in the design, they’re simple to assess and correct and, as such, must figure in our proactive minute-maintenance approach. In turn, we can build a simple and effective preventive maintenance-check program around the referenced consumable devices and external influences. Consider the following V-belt example.

A motorized, belt-driven fan unit
V-belt drive systems transmit power at a defined number of revolutions per minute (RPM) from a motor-driven “drive” sheave pulley to one or more “driven” sheave pulleys attached to, in this case, a fan shaft.

A V-belt is designed to “wedge” itself into the v-shaped sheave groove and ride with full belt-side contact at the top of the groove, leaving a substantial gap between the bottom of the belt and the groove valley. Worn or non-matched belts that ride lower in the groove (known as differential driving) can eventually bottom out and polish the groove valley and should be replaced quickly.

To transmit power efficiently, one of the sheaves employs an adjuster mechanism that allows the belt(s) to be tensioned to a point where, under load, they “slip” between 1% and 3%, permitting intentional creep and release from the wedged position in the groove as the pulley turns. If slip is below 1% (belt too tight), the belt won’t release correctly and will generate frictional heat; if slip exceeds 3% (too loose), the belt will slip too much and start to “dance,” creating rubbing friction in the sheave, raising the temperature and causing premature wear of both belt and sheave.

Checking for slip is simply a matter of using a handheld strobe light to check and calculate the RPM speed difference between the driver and driven pulley. For example, if a 1750-RPM motor is used with 1:1 ratio sheaves, the driven pulley should be running between 1700 and 1730 RPM when tensioned correctly. If not, it’s in a no-go state requiring immediate attention. A correctly tensioned belt running on an unworn, correctly aligned sheave pulley is designed to return an operational efficiency close to 97%.
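The strobe check above amounts to a simple calculation: compare the measured driven-pulley speed with the theoretical speed (motor RPM times sheave ratio) and see whether the shortfall falls in the 1-3% band. A minimal sketch, with function names of my own invention:

```python
# Sketch of the strobe-light slip check. Belt slip is the percentage by
# which the driven pulley lags its theoretical speed (motor RPM x sheave
# ratio). A slip of 1-3% is the acceptable band described in the text.

def belt_slip_percent(motor_rpm, driven_rpm, sheave_ratio=1.0):
    """Percent slip between theoretical and measured driven-pulley speed."""
    theoretical = motor_rpm * sheave_ratio
    return 100.0 * (theoretical - driven_rpm) / theoretical

def slip_status(motor_rpm, driven_rpm, sheave_ratio=1.0):
    """Classify the reading as go or no-go per the 1-3% slip band."""
    slip = belt_slip_percent(motor_rpm, driven_rpm, sheave_ratio)
    if slip < 1.0:
        return "no-go: belt too tight (slip %.1f%%)" % slip
    if slip > 3.0:
        return "no-go: belt too loose (slip %.1f%%)" % slip
    return "go: slip %.1f%% within 1-3%% band" % slip

# A 1750-RPM motor with 1:1 sheaves should read about 1698-1733 RPM
# downstream, which matches the article's 1700-1730 rule of thumb:
print(slip_status(1750, 1715))  # go: slip 2.0% within 1-3% band
```

Note that 1750 RPM less 1% is 1732.5 and less 3% is 1697.5, so the 1700-1730 range quoted in the text is a convenient rounded version of the same band.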

Most V-belts are manufactured from an elastomer that encases longitudinal rows of polyester or Kevlar internal-tension members. During power transfer, belts are subjected to fatigue-causing stresses that eventually lead the belt-tension members to fail. But, provided the belts operate at a temperature less than 120 F and are installed correctly, they can be expected to deliver 15,000 hours or more of belt-life.

High operating loads with large fans and motors require multiple belts to transmit power with minimum energy losses. These belts must be matched if they are to be tensioned successfully. Matched belts are often purchased in sets of two, four, six, etc., manufactured from the same batch of rubber. If a system designed for six belts runs with only five, each remaining belt carries more than its design load factor, leading to overheating, inefficiency and premature failure. Belts should be visually inspected regularly to ensure all are in place and matched.

Misalignment, in both offset and angular form, is a major problem with belt-driven systems. It causes a belt’s tension members to flex sideward and vibrate, creating additional stress. When a misaligned belt enters into the sheave groove, it “rubs” the sheave wall, raising the belt temperature through frictional heat that results in rapid wear of both belt and sheave. Precision alignment of driver/driven systems using laser or reverse-dial methods is a must to reduce heat, wear and energy loss. Sheave wear is easily checked using a $10 sheave profile gage. The “tooth” profile is placed in the sheave groove and a flashlight is shone from behind. If more than 1/32” of light (wear) is evident, a no-go state exists, requiring replacement of the sheaves and belts.

Once the motor and fan are aligned and all fasteners torqued correctly, the driven system will run quietly with virtually no vibration present in the motor. When this is achieved, the motor fastener bolts and frame tension adjuster nuts can be line-painted in position, with a check line across the fastener onto the fastener plate. If the fastener becomes loose and slackens off, the painted lines will not align. This will indicate a no-go state and quickly allow the problem to be noticed and arrested.

Taking this approach and building a first-alert PM based on the equipment’s weakest link helps us compile a checklist like the one shown below that will identify, in minutes, a no-go state (exception-based maintenance) requiring a skilled intervention.

Minute Maintenance: Belt-Drive Assembly Checklist

  •  Check the check box against the task only when a no-go exception is found.
  •  Using a strobe light, check that driven pulley speed is between 1700 and 1730 RPM.
  •  Using an IR thermometer or camera, check that each belt temperature is < 120 F.
  •  Check that all motor painted fastener alignment check lines are aligned.
  •  Check that there are no dancing or heavily vibrating belt(s).
  •  Check that all belts are matched for size and batch numbered.
  •  Check that all belts sit in a similar position, flush with the outside sheave diameter.
  •  Check for smell of burning rubber.
  •  Check for visible signs of abraded rubber around the sheave pulley.
  •  When machine is not running, open machine-guard inspection window and check all sheave grooves using a groove-profile gage and flashlight for <1/32” wear (LOCKOUT required).
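The exception-based logic behind this checklist can also be encoded for a handheld or CMMS form so that only no-go findings are reported. The sketch below is illustrative: the field names and sample readings are hypothetical, and it covers only the measurable tasks.

```python
# Minimal sketch of an exception-based inspection: each check fires only
# when a no-go state is found, and the run reports just the exceptions
# for skilled follow-up. Field names and thresholds follow the checklist.

def run_checklist(readings):
    """readings: dict of measured values; returns a list of no-go findings."""
    findings = []
    if not 1700 <= readings["driven_rpm"] <= 1730:
        findings.append("belt slip out of band (check tension)")
    if readings["belt_temp_f"] >= 120:
        findings.append("belt temperature >= 120 F")
    if not readings["paint_lines_aligned"]:
        findings.append("fastener check lines misaligned (loosening)")
    if readings["sheave_wear_in"] >= 1 / 32:
        findings.append("sheave groove wear >= 1/32 in.")
    return findings

# Hypothetical readings from one inspection round:
sample = {"driven_rpm": 1712, "belt_temp_f": 135,
          "paint_lines_aligned": True, "sheave_wear_in": 0.01}
for finding in run_checklist(sample):
    print("NO-GO:", finding)   # NO-GO: belt temperature >= 120 F
```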

Performed on running equipment (with the exception of the last task), this simple, objective checklist can be completed in less than 10 minutes by a minimally trained non- or semi-skilled individual. Any no-go finding is to be acted upon immediately by skilled personnel. Good luck! MT&AP


11:14 pm
April 21, 2014

The Manufacturing Connection: Reliability and Asset Performance For Profit

25 March 2014

Gary Mintchell, Executive Director


The one point you will hear from me as I set the direction of Maintenance Technology & Asset Performance magazine is that everything we work on—whether through division management, plant management, reliability, engineering, maintenance or operations—adds to the economic value of our company. What we do has strategic value. Let’s act that way.

Surveying the information that has been provided in this space over the past decade or so, I notice a recent trend to include more than traditional maintenance product information by adding emphasis on reliability. This is a good thing.

Let’s consider the real challenge we have. It is not just fixing things. It is not even simply reliability, which can also be viewed as just fixing things. The real problem is optimum throughput of product over time. We call that asset performance.

There are many tools at our disposal to help us in our quest for optimum asset performance. Most now derive from digital and computational technologies. Certainly, today’s professionals must be thoroughly proficient with digital networking and all the diagnostics that are available. Controllers and field devices are now information servers.

Using all this information becomes crucial. The tools for digesting, analyzing and presenting it may seem to be the same you have used for years. These tools, however, have also changed over time so that if you are not using the latest versions, you could be missing out. Computerized Maintenance Management Systems (CMMS), Enterprise Asset Management (EAM) and Manufacturing Execution Systems (MES)/Manufacturing Operations Management (MOM) applications have existed for years, but their powers have grown significantly. Better check yours.

MESA International is an association for companies and individuals who develop and use MES solutions. Its long-running “Metrics That Matter” surveys have consistently shown that these software tools can help improve performance. This year, LNS Research conducted and analyzed the survey in collaboration with MESA. A recent blog post discusses preliminary results of the survey. Two of the results concern maintenance specifically: Average Annual Performance Improvement of Maintenance was 14.9%, and Average Annual Performance Improvement of Compliance was 18.5%.

The top metrics for maintenance include: Percentage Planned vs. Emergency Maintenance Work Orders and Downtime in Proportion to Operating Time. According to MESA, survey results indicate that “leading companies, especially those in asset-intensive industries, have learned that by focusing on improving maintenance metrics they can prevent expensive downtime and keep operations running at peak efficiency and safety. Enterprise Asset Management (EAM) software applications in combination with real-time condition monitoring information coming from MOM and Industrial Automation applications are enabling companies to operate in a more predictable fashion instead of a reactive/disruptive fashion.”
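The two maintenance metrics named above are simple ratios once the work-order and runtime data are in hand. A sketch, with record layouts that are illustrative rather than drawn from any particular CMMS or EAM product:

```python
# The two top maintenance metrics from the MESA survey, computed as
# simple ratios. Record fields here are illustrative, not from any
# specific CMMS/EAM schema.

def planned_vs_emergency(work_orders):
    """Percentage of work orders that were planned rather than emergency."""
    planned = sum(1 for wo in work_orders if wo["type"] == "planned")
    return 100.0 * planned / len(work_orders)

def downtime_ratio(downtime_hours, operating_hours):
    """Downtime in proportion to operating time."""
    return downtime_hours / operating_hours

# Hypothetical month: 8 planned and 2 emergency work orders,
# 12 downtime hours against 480 operating hours.
orders = [{"type": "planned"}] * 8 + [{"type": "emergency"}] * 2
print(planned_vs_emergency(orders))    # 80.0
print(downtime_ratio(12.0, 480.0))     # 0.025
```

Tracking both over time is what turns the raw CMMS data into the "predictable fashion instead of a reactive/disruptive fashion" MESA describes.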

Top metrics in the compliance category include: Reportable Health and Safety Incidents; Reportable Environmental Incidents; and Number of Non-Compliance Events per Year. “It’s hard to argue that health, safety and environmental issues shouldn’t be at the top of everyone’s list for vigilance and ongoing improvement,” MESA concludes. “These types of improvements require ongoing cultural awareness, supported by constant visibility and actions to improve these critical business and social metrics.”

What would you like to see in Maintenance Technology & Asset Performance in print and online? Let me know what you would like to know. I welcome ideas and feedback. You can send an email, “DM” me on Twitter @garymintchell, message me on LinkedIn, or check out the Maintenance Technology group on LinkedIn and send a note there. MT&AP

Gary Mintchell is Executive Director of Applied Technology Publications. He also writes at



For Top Performance, Know Your Critical Equipment

26 March 2014
Jane Alexander, Deputy Editor



Real-world benefits go well beyond easier decision-making.

It’s not possible or necessary for all equipment in a plant to receive equal attention. The key is to focus on the most critical assets—whatever that means to an operation. Applying the principles of asset criticality can facilitate your decision-making and generate a number of other valuable benefits in the process.

Your site’s critical-equipment determinations should be based on business goals and objectives, says manufacturing consultant and MT&AP Contributing Editor Bob Williamson. Identify your most critical assets and rank them on a scale based on risk (probability and consequences) or other impact they might have on business goals and values. In this process, you’ll also identify your least critical assets and those somewhere in the middle. “Focused improvement on the most critical few that you ultimately move into a ‘maintenance fast lane’ will lead to enhanced performance,” says Williamson, “and possibly free up reactive maintenance resources to perform more planned/preventive maintenance work.”
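Williamson's ranking approach can be sketched in a few lines of code. This is an illustrative sketch only: the asset names, the 1-5 rating scales and the product-style risk score are assumptions, not something prescribed in the article.

```python
# Illustrative sketch: rank assets by a simple risk score
# (probability of failure x consequence of failure).
# Asset names and 1-5 scales are hypothetical examples.

def risk_score(probability, consequence):
    """Risk on a 1-25 scale from two 1-5 ratings."""
    return probability * consequence

# (name, probability rating, consequence rating)
assets = [
    ("Main process pump", 4, 5),
    ("HVAC unit", 2, 2),
    ("Boiler feed valve", 3, 4),
]

# Sort most critical first; the top few become "maintenance
# fast lane" candidates, the rest fall into middle/low tiers.
ranked = sorted(assets, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for name, p, c in ranked:
    print(f"{name}: risk={risk_score(p, c)}")
```

The exact scales matter less than the discipline of ranking every asset the same way, so that "fast lane" membership is defensible against business goals rather than gut feel.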

Modern asset-management methods call for proper attention to be paid to equipment systems throughout their life cycles: from design and procurement through installation, commissioning, operation and maintenance to renewal and/or decommissioning. This is not something a site can just add to its wish list and forget: The new International Asset Management Standard (ISO-55000, issued in January 2014) requires asset risks to be identified and appropriate risk-management practices put in place.

It’s important to remember that “critical equipment” not only includes production-related processes, utilities, facilities equipment and the automated systems that run them, but also health, safety and environmental-related equipment. Often, your most critical assets may also be the most at risk if they fail to perform reliably.

The following story by PotashCorp’s Matthew Fenwick makes a good case for how establishment of sound criticality determinations can set the stage for a variety of payoffs. In this first-person account, Fenwick discusses improvements in alert-monitoring-device strategies that, among other things, allow his team to save time and better manage an increasing workload.

Fenwick, an instrumentation technician at the company's New Brunswick operation, tells the story in his own words.

Our nine-person instrumentation team at PotashCorp’s New Brunswick (NB) division had responsibility for managing 2000 input/output (I/O) points in 2012. Knowing a 4400 I/O point expansion would come online in 2013, we had to find ways to save time and manage the workload. We needed to prioritize the right devices and alerts, reduce time spent on troubleshooting and increase technician efficiency.



With improved maintenance, PotashCorp NB is prepared for growth.

Identifying the right devices
We first tackled the Alert Monitor function in our asset-management system, Emerson’s AMS Device Manager. With hundreds of alerts coming in, we needed to know which ones were most important for our business. We started by rating plant areas based on criteria such as safety considerations, regulatory compliance, product quality, process throughput and operational cost. Next, we prioritized the loops and devices according to how critical the asset is and how often it fails.

The resulting maintenance priority index gave us insight into which areas to target for process changes and which alerts should be configured and channeled to the maintenance-planning department. We’ve established a weekly checkpoint to monitor and process these alerts as part of a proactive maintenance approach. At a glance, technicians can view the Alert Monitor on any engineering station in our plant. They can identify the potential bad actors, do further investigation and make modifications to correct the deficiencies before they become failures.
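A maintenance priority index of this kind can be sketched as a simple scoring-and-routing rule. The weighting (a straight product), the scales, the device tags and the threshold below are all hypothetical, for illustration only.

```python
# Hypothetical sketch of a maintenance priority index: combine a
# plant-area rating with asset criticality and failure frequency,
# then channel only high-priority alerts to maintenance planning.

def priority_index(area_rating, criticality, fails_per_year):
    # Simple product; the weighting scheme is an assumption.
    return area_rating * criticality * fails_per_year

alerts = [
    {"device": "FT-101", "area": 5, "crit": 4, "fails_per_year": 3},
    {"device": "TT-205", "area": 2, "crit": 2, "fails_per_year": 1},
]

THRESHOLD = 20  # assumed cutoff for routing to planning

to_planning = [
    a["device"] for a in alerts
    if priority_index(a["area"], a["crit"], a["fails_per_year"]) >= THRESHOLD
]
print(to_planning)
```

In this toy example FT-101 scores 60 and is routed to planning, while TT-205 scores 4 and stays with the weekly review, mirroring the idea of separating bad actors from background noise.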

Reducing troubleshooting
The next step was to dig deeper into the way we applied the principles of asset criticality to our daily work. Specifically, we needed to improve our valve signature management, which was historically our biggest and most expensive area of failure. My wake-up call was a 3:00 a.m. emergency for an issue with a major control valve, resulting in the loss of six hours of prime production. Our maintenance superintendent asked why we had not identified the problem earlier. I told him, “You can’t predict what you don’t scan.” In other words, if we’re not monitoring a device, we can’t predict when it will fail.

The experience led us to reexamine our asset criticality sheet and to recognize that we were not adequately accounting for the way our harsh environment was impacting valves. Potash mines are basically salt mines. Combined with the plant’s eastern seaboard location, there is potential for valves to be destroyed by humidity and salt from the outside and slurries from the inside.

Today, rather than relying on reactive maintenance strategies, we’re using predictive diagnostics to plan our work. To account for environmental impact, we adjusted our valve criticality to ensure we were looking at the valves that are most vulnerable. We calculate a Valve Maintenance Action Plan (VMAP) score for each valve and established rules that dictate how frequently we perform signatures. For example, if the VMAP is greater than 400, we perform signatures every three months. If the VMAP is 300-400, we perform them every six months.
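The VMAP scheduling rules reduce to a small lookup function. The two stated thresholds come from Fenwick's account; the article doesn't say what happens below a score of 300, so the annual default here is an assumption.

```python
# Sketch of the VMAP scheduling rules described above:
#   VMAP > 400    -> valve signatures every 3 months
#   VMAP 300-400  -> every 6 months
#   below 300     -> annual (ASSUMPTION; not stated in the article)

def signature_interval_months(vmap):
    if vmap > 400:
        return 3
    if vmap >= 300:
        return 6
    return 12  # assumed default for low-scoring valves

print(signature_interval_months(450))  # 3
print(signature_interval_months(350))  # 6
```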

We also make use of the VMAP information to set a schedule for Emerson’s AMS ValveLink performance diagnostic sweep. This gives us a snapshot of the integrity of the valve components using five online tests: Supply Pressure Diagnostic, Relay Adjustment Diagnostic, I/P & Relay Integrity Diagnostic, Travel Deviation Diagnostic and Air Mass Flow Diagnostic. We now have the information to know when we need a full off-line diagnostic test.

By using this methodology, we have a handle on our signatures and can decrease the negative effects that cause reduced tonnages and production losses. We have experienced less downtime due to valve failures. We are still in the preliminary stages of full valve maintenance reliability, but we anticipate significant benefits.



Potash flotation cells at the New Brunswick site.

Information at our fingertips
Besides focusing on prioritization, planning and proactive maintenance, we found ways to get more out of our technology. For example, when I joined the company in 2010, AMS Suite software was only used as a storage facility for configurations and device checks during start-ups. After I spent time exploring the tool in depth, I saw the potential to use it as a way to instantly bring information to technicians.

We created an embedded program in the AMS Device Manager—using a Microsoft Access Database run by Visual Basic—to provide access to all our maintenance documentation with the click of a mouse (no more searching in the maintenance shop or control room for the book with loop configuration diagrams). Similar to a personalized search engine, we have all needed documentation in an organized, easy-to-find format that we call the Instrumentation Information Web.

In 2012, we captured the impact of the Instrumentation Information Web through a pilot project. Our instrumentation planner estimated the team would spend 807 hours working on proactive maintenance tasks based on work orders. However, with the implementation of the Instrumentation Information Web, the department spent only 376 hours on proactive tasks, a savings of more than 50% over the course of the pilot.

We have also seen substantial savings in commissioning and alerts. We devised a manual for alarm configuration and installed it on a Wireless Mobile Worker application. This has saved thousands of hours in commissioning time.



Matthew Fenwick, Instrumentation Technician, PotashCorp NB

Realizing the potential
The secret to realizing the full potential of our technology lies in our corporation’s Champion Concept. Almost three years ago, Bob Emery, Instrument Supervisor, PotashCorp NB, developed the vision for specialization. He saw that with the number of technologies PotashCorp was implementing, there wouldn’t be enough time to train all technicians in all technologies. He also recognized the frustration he was seeing on his team, so he began developing champions for each technology.

Specialization brings other benefits for PotashCorp NB. The champion model increased our ability to respond more quickly and solve problems in-house. For example, the event logger was frequently overloaded because we had difficulty understanding how alerts propagated through integrated DeltaV, AMS Device Manager and ValveLink applications.

Our in-house technology champions and others performed testing and gained a thorough understanding of how alerts are processed. Working together, we created a guideline for configuring alerts in DeltaV. The approach allowed us to prevent nuisance alerts and filter by transmitter or card-level. This led to better alarm-management and reduced the burden on the event logger.

In addition to the benefits obtained by the company, we benefit professionally. Although initially the change was difficult, specialization allows us the time to hone our craft, develop deeper knowledge and engage our creativity.

We invested the time upfront to make significant changes in our maintenance practices. What we gained is confidence that we are monitoring the right assets, solving problems more quickly and giving our technicians what they need to do their jobs well. Our practice will continue to evolve, but now we’re ready for the plant growth that lies ahead.

Matthew Fenwick is an Instrumentation Technician for PotashCorp of Saskatchewan, in Penobsquis, New Brunswick, Canada. A graduate of the Industrial Control Technology program, he has worked in the mining and pulp-and-paper industries for the past 10 years. For more information on his success story, email

Some caveats

With the ISO-55000 Asset Management Standard, it’s more important than ever for operations to accurately define and document their critical equipment assets. A roadblock for facilities that haven’t yet completed this “must do”—or started on it—may be one of direction: What approach works best?

While you can find plenty of tools, checklists and helpful advice on making critical-equipment determinations, keep in mind that they don’t reflect universal solutions. What’s appropriate for one type of operation may not be for another. There are several factors for a site to consider before adopting a specific strategy, including its industry sector and any standards related to the assets used in it. Do your research.

An additional caveat comes from Doc Palmer, author of McGraw-Hill’s Maintenance Planning and Scheduling Handbook. While he applauds the use of criticality rankings in developing maintenance strategies, he cautions operations to not let those rankings complicate the reporting and addressing of the work itself. According to Palmer, personnel writing work requests need a simple way to communicate urgency based on time. “If we’re not careful,” he says, “injecting the criticality ranking into some calculated equation could hinder the ease of this important communication.” (Palmer’s article “Simplify Your Priority System,” from the May 2010 issue of this magazine, discusses various priority systems.) MT&AP

For more information on principles of asset criticality, refer to Bob Williamson’s Jan. 2013 Uptime column, “Equipment Criticality: Life in the Fast Lane,” or email

For more information on how the application of asset-criticality principles meshes with successful maintenance planning and scheduling, please visit, or email



Sustaining Operator-Driven Reliability


Research shows that ODR success can be elusive. Here’s how to make it work.

By Jane Alexander, Deputy Editor

It seemed like a good idea. Everybody appeared to buy in. Big investments in time, resources and technologies were made; considerable thought, communication and hard work were applied. Finally, everything came together: Your site implemented Operator-Driven Reliability (ODR). Congratulations! But how is it working for you now?

ODR (the formalized involvement of operators in a facility’s asset-reliability efforts) isn’t new. Typical operator activities can range from collecting electronic data on vibration, temperature and the like, and noting abnormal equipment conditions; to performing routine replacements of gauges and other devices and helping technicians verify shutdown procedures; to assisting with equipment testing. Operators do NOT conduct data analysis within the scope of ODR. While this brief description of what ODR is and isn’t might seem straightforward, some plants have found the process to be more challenging than they expected.

According to experts, one of the biggest challenges organizations face is how to sustain an ODR program. Research conducted by bearing-maker SKF—an ODR pioneer, having helped implement and sustain it in hundreds of operations around the world—has uncovered several reasons why (see Chart). They include: inadequate preparation prior to implementation; a change in or lack of management; changing corporate initiatives; technology barriers; even sabotage. But these roadblocks can be overcome.


“Sustaining ODR doesn’t have to be a struggle,” says Dave Staples, SKF Global Services Manager for Traditional Energy. As with any other important enterprise initiative, a key to achieving success in ODR is to approach it as a process, not as a project. “Projects have ends,” explains Staples. “Processes live, grow and improve.”

In Staples’ experience, rigorous, informed upfront planning is necessary for ODR success. His recommendations include:

Clearly define and communicate goals and objectives. Because operational needs and goals differ, it’s important to align ODR with corporate business goals. Also, ODR goals and objectives must incorporate current reliability strategies, practices and philosophies.

Ensure cross-discipline support. Staples reminds plants that to have an impact on asset and process reliability and performance, an ODR program needs support not just from Maintenance and Operations, but from the Engineering and Reliability organizations as well.

Obtain commitment from the plant floor to the executive suite. Management provides credibility for ODR and ensures funding and resource allocation. It also confirms ODR as a company priority. “It’s critical to identify ODR supporters and potential non-supporters early on,” says Staples, “and know where to look for improvement opportunities.” Because ODR is a process that operators will own, they should be involved in and committed to it from the start, even if they’re unionized. Identify a sponsor or champion from within the operations team whom all operators respect.

Plan for resistance to change and how to deal with it. Pushback on ODR can come from many sources, requiring several different management responses. According to Staples, responses can include reward and recognition for workers who are doing the right things; following compliance measurements and using the findings to drive corrections; supporting operator findings by correcting abnormal conditions quickly; and engraining the ODR process and findings into daily operations by hosting weekly status and planning meetings.

Embed decision support. Embedding knowledge allows operators to make better decisions without requiring basic guidance from the plant’s skilled workforce. For example, Staples says that with an automated fault-diagnosis process and the use of decision trees and Boolean logic, operators can be directed to collect additional information leading to the root cause of a problem.
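Staples' point about decision trees and Boolean logic can be illustrated with a tiny diagnostic function. The conditions and the guidance strings below are hypothetical examples, not SKF's actual rule set.

```python
# Hedged sketch of embedded decision support: Boolean checks
# arranged as a small decision tree direct an operator toward a
# likely root cause. The questions and advice are illustrative.

def diagnose(low_supply_pressure, travel_deviation):
    if low_supply_pressure:
        return "Check air supply and regulator before inspecting the valve."
    if travel_deviation:
        return "Collect a valve signature; possible actuator or packing issue."
    return "No fault indicated; log the reading and continue rounds."

print(diagnose(True, False))
print(diagnose(False, True))
```

The value of encoding even simple logic like this is that the operator is prompted to gather the next piece of evidence on the spot, instead of escalating to a technician for basic triage.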

Develop and track appropriate key performance indicators (KPIs). Organizations often overlook important metrics associated with ODR, either by putting them off until after implementation or not tracking them at all. Without KPIs, says Staples, “The ability to justify, learn, improve and optimize is lost.” He recommends that KPIs be established as early as possible in the implementation process to allow benchmarking of current status and measuring of improvements. “Make them readily available to those responsible for ODR at the site,” he says, “so they can track the results of their efforts or the need for corrective actions.” Because KPIs will drive the direction of ODR efforts, it’s imperative to review them regularly and confirm that they continue to help achieve the company’s strategic goals.

Have a program life-cycle cost plan. Mobile technology changes quickly. Plan to replace hardware at the end of each cycle. Staples says that although there will be incremental changes to technology throughout every life cycle, they’ll generally be software-related. Annual maintenance agreements guarantee sites that they will always have the latest version of software and access to new features during a life cycle.

Staples also encourages sites to:

  • Automate ODR with technology, but keep technology transparent.
  • Provide effective levels of operator feedback (see Sidebar).
  • Standardize all best practices and share them.
  • Plan for ODR expansion and how to pay for it.
  • Maintain a detailed, living training plan.

One of Staples’ most valuable pieces of advice regarding ODR is to manage people’s expectations about it from the outset. Keep in mind that ODR is not intended to be a complete maintenance solution. While operators “inspect,” for example, they cannot be expected to “analyze.”

As Staples wrote previously in this publication (Feb. 2007), “ODR is best considered a complementary practice, and is almost always part of a strategically applied maintenance plan to achieve asset reliability and availability aligned with a company’s business objectives.”

This approach has weathered well over the years, and it will work for your ODR program, too. MT&AP

The Two Levels of Operator Feedback

The type of feedback an operator receives and how it is delivered will impact his/her response. There are two levels of operator feedback, each designed to elicit a specific response:

The first level explains the need for an inspection process relative to possible present conditions, causes, related problems and consequences. The second level uses documentation, graphics and photos to drive immediate action. These elements should be available via on-board memory or through the Wi-Fi capabilities of mobile devices.

Be creative in your approach to feedback. The goal is to simplify operator input and minimize the typing of inspection results.

ODR Supports Business Excellence

Operator-Driven Reliability complements Business-Excellence (BE) programs in several key ways:

Empowerment is reinforced because operators take ownership of their equipment’s reliability. With this responsibility comes the ability for operators to initiate corrective actions based on abnormal conditions they uncover.

Standardized work is managed through technology. The idea behind BE programs is for all operators in a facility to work according to defined principles. ODR rounds are detailed in the inspection technologies operators use (i.e., which machines to inspect and how, what abnormal conditions look like, and what to do next). And as operator information moves from paper to digital form, it is shared faster, allowing quicker corrective actions. Whether a site is implementing a new program or assessing a current one, the ODR process fosters an environment of continuous improvement, driving inefficiencies and waste out of the system and putting the focus on high-value activities, a BE fundamental.

Teamwork is promoted by both BE and ODR. When ODR is appropriately planned, implemented and managed, equipment reliability becomes an enterprise-wide endeavor. Working as a team, people throughout the organization—regardless of department, title or specific responsibilities—can truly impact reliability improvements.


Reward, Incentivize, Recognize

Rewards and Incentives are integral parts of most culture-changing programs, including ODR, because they reinforce acceptable behavior.

Examples include cash, trips and gift certificates; Operator of the Week/Month programs; and tie-ins to existing company initiatives, such as profit-sharing. Once a desired behavior becomes a required behavior, use of rewards and incentives can be minimized or eliminated.

Recognition Programs are the basis for quantifying return on investment. They document the value proposition that ODR delivers to the business.

In contrast to rewards and incentives, recognition programs must remain in place. Keeping them fresh means keeping them visible—on posters, in newsletters, through traditional news outlets and via social media—and posting KPIs.

Operator-Certification Programs can also be used to transition away from rewards to recognition. Additional certifications can validate operator rank or wages.

Certification programs can be qualitative or quantitative forms of recognition. Quantitative programs, though, must have real dollars tied to them and should be tracked closely.



Technology Drives Public-Transit Efficiencies

26 March 2014

Neil Roberts, Yarra Trams


An Australian tram operation uses a centralized data system to improve maintenance and performance.

Public transportation systems depend on a complex combination of equipment to function, from wheels and axles to power lines and tracks. The infrastructure that makes a transportation system run must be efficiently maintained to prevent service delays and ensure that passengers arrive at their destinations safely and on time.

What if trams and trains could tell operators that a wheel needed to be fixed before it broke, or that a particular route was delayed because of bad weather? Advances in technology are unlocking this type of insight into the health and efficiency of public transit systems, enabling operators to improve maintenance efficiency, reduce downtime and better meet passenger demands.

As the Director of Information, Communication and Technology (ICT) for KDR Victoria, operator of the Yarra Trams system in Melbourne, Australia, my job is not just about making sure that our servers run well and our email functions properly. My team and I implement technology to enhance the passenger experience and operational effectiveness of the largest operating tram network in the world. Thanks to technology, our trams can now alert a maintenance team when and where a repair needs to be made, or tell passengers via the free tramTRACKER smartphone application when the next tram will arrive at their stop.

Our iconic tram system has been in operation for over 100 years. Today, it encompasses more than 91,000 pieces of equipment, including 250 kilometers of double tracks, eight different classes of tram, 500 kilometers of power lines, wheels, axles, bogies and much more. Maintaining this infrastructure is a complicated web of overlapping schedules, necessary repairs and the very different upkeep concerns of new and old equipment.

Service disruptions can be caused by anything from equipment failures, to bad weather, to heavy vehicle and pedestrian traffic. Our safe, efficient transport of nearly 200 million passengers annually calls for a rapid response to such disruptions, along with effective preventive and predictive asset-management practices and frequent customer communications.


The main operations center of Melbourne’s Yarra Trams relies on smarter technology to enhance the passenger experience and operational effectiveness of the largest operating tram network in the world.

Details in the data
To keep everything running smoothly, we’ve implemented a technology system that incorporates data, smarter infrastructure software and analytics and both mobile and cloud computing. This system turns 91,000 different pieces of equipment—from trams to power lines and tracks—into 91,000 living, talking data points, some with data-transmitting sensors. The data, which is not only collected through sensors but also via employee and passenger reports, unlocks the visibility of vital signs to help us understand the health and efficiency of our network.

Data collected about tram service and functions is hosted on one centralized system—IBM’s Maximo—and is accessible by certain employees to encourage cross-organization collaboration. Using IBM Smarter Infrastructure software, different functions can analyze the data to garner information about improving response to maintenance issues, preventing service delays and re-routing trams. Insight gathered from the software is also used to send work-order alerts to maintenance teams.

Maintenance workers remotely access work orders and receive up-to-date asset information on mobile tablets, helping them improve repair management and respond more quickly to potential disruptions. After a work order is completed, maintenance crews use the tablets to log how much time was spent on the repair and details about any follow-up that may be necessary. Repair logs are then used to identify the trends and triggers behind delays that can be avoided with predictive maintenance.

Trends or patterns in tram and infrastructure repair history are identified through data analysis and used as a guide for scheduling predictive maintenance, which minimizes service downtime by enabling maintenance teams to fix equipment before it breaks.
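A minimal version of this kind of repair-log trend mining is sketched below: compute the average interval between repairs per asset, so maintenance can be scheduled just ahead of the expected failure. The asset IDs and dates are hypothetical, and a real system like the one described would query its central asset database rather than an in-memory list.

```python
# Illustrative sketch: derive average days between repairs per asset
# from a repair log, as a guide for predictive-maintenance scheduling.
# Asset IDs and dates are made-up examples.
from collections import defaultdict
from datetime import date

repair_log = [
    ("tram-042-wheel", date(2013, 1, 5)),
    ("tram-042-wheel", date(2013, 3, 6)),
    ("tram-042-wheel", date(2013, 5, 5)),
    ("track-seg-17", date(2013, 2, 1)),
]

by_asset = defaultdict(list)
for asset, repaired_on in repair_log:
    by_asset[asset].append(repaired_on)

# An asset needs at least two repairs before a trend can be estimated.
for asset, dates in by_asset.items():
    dates.sort()
    if len(dates) > 1:
        gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
        print(asset, "avg days between repairs:", sum(gaps) / len(gaps))
```

With an average interval in hand, a planner can book the next inspection slightly inside that window instead of waiting for the asset to fail in service.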


Embedded in the tracks, smart sensors like these provide a range of critical operational data, including information on needed tram maintenance.

Responding to an out-of-shape wheel: An automated wheel-measurement machine detects a tram wheel that has worn out of shape over time on the tracks. Information about the impending repair need is used to alert maintenance crews, who complete the work and record all details on a mobile tablet. The repair log is compared with previous wheel-repair logs and used to schedule preventive maintenance.

Keeping trams running rain or shine: Melbourne has an annual rainfall of more than 24 inches, and it is common for some streets and tracks to flood when it rains. Collected data has indicated problem-prone areas, enabling maintenance crews to take precautions to prevent flooded tracks. Additionally, if tracks do flood, response crews are quickly alerted via mobile devices, making a quick response possible before service is delayed.

Allocating equipment to accommodate heavy passenger traffic for special events: When events like the Australian Open tennis tournament are held, we allocate trams to specific areas where heavy passenger traffic is expected. Passengers are alerted about service changes via tramTRACKER.

Deploying the technology system has been a gradual, ongoing process that has involved retrofitting older trams, equipment and power substations with sensors, as well as building and installing new equipment, such as our E-Class tram. The E-Class trams are equipped with Wi-Fi to enable information about tram health and efficiency to be downloaded when a tram returns to the depot. The next-generation E-Class tram began carrying passengers in November 2013.

The new technology system has also allowed our entire organization to transition from a paper-based asset-management system to IBM Smarter Infrastructure software. The enterprise-wide initiative has enabled us to consistently exceed our key performance measurements around tram service and punctuality. In October 2013, service delivery was 99.11% and tram punctuality 82.70% (against targets of 98% and 77% respectively). That was a great result given that 80% of the network shares road space with motor vehicles.

The future of public transportation is wide open with opportunities to apply technology in innovative ways to improve efficiency and reliability, in turn boosting passenger usage and stakeholder confidence. The way Yarra Trams uses sensors, data, analytics, cloud and mobile technology today is just the foundation. We look forward to evolving our transit system to benefit from what technology makes possible. MT&AP