LENS wireless technology allows gas monitors to communicate with each other with no need for IT setup, infrastructure, or a central controller. The company’s Ventis Pro series and Radius BZ1 monitors are available with the technology.
Industrial Scientific Corp.
The Model 2210 series low-noise, single-axis accelerometer modules integrate a MEMS variable-capacitance (VC) accelerometer chip with high-drive, low-impedance buffering for reportedly reliable, repeatable measurements. When used with the company’s mounting block, the unit can measure on one, two, or three axes, allowing specification of a single part number for multiple requirements. Available in ranges from ±2 to ±400 g, the series generates two analog voltage outputs that vary in response to applied acceleration. The modules provide either single-ended or differential output, the latter doubling accelerometer sensitivity over single-ended versions. Applications include lower-frequency vibration testing.
Silicon Designs Inc.
We face challenging situations every day. In many cases, dealing with short-term challenges is a maintenance organization’s normal way of life. The problem is our long-term challenges, the ones at our doorstep, or looming just over the horizon that we often put off tackling. They’re “giants” bearing down on us.
Not too long ago, I spoke to nearly 90 maintenance professionals at an Oklahoma Predictive Maintenance User’s Group (OPMUG) event. Maintenance managers, supervisors, technicians, mechanics, planners, and engineers, they came from a wide variety of industries. Regardless of their particular role or business, though, they were all actively pursuing better maintenance practices.
I asked the attendees to take a few minutes to think about the top three maintenance challenges they expected to see over the next three, five, or 10 years, then record them on note cards. Let’s consider what they wrote and how their thinking might mirror yours. Based on my analysis, the 117 challenges they came up with fit into the following nine major categories (some fit in more than one):
• Skills Gaps (35)
• Culture of Reliability (35)
• Training & Qualification (27)
• Top Management (26)
• New Technology (11)
• At-Risk Assets (10)
• Parts (10)
• Knowledge Transfer (8)
• Life-Cycle Asset Management (5)
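The category totals and the thematic groupings discussed in the sections that follow can be checked with a quick tally. This Python sketch uses the counts from the list above; note that the category totals exceed the 117 raw responses because some responses fit more than one category:

```python
from collections import Counter

# Category counts from the OPMUG note cards (from the list above).
tallies = Counter({
    "Skills Gaps": 35,
    "Culture of Reliability": 35,
    "Training & Qualification": 27,
    "Top Management": 26,
    "New Technology": 11,
    "At-Risk Assets": 10,
    "Parts": 10,
    "Knowledge Transfer": 8,
    "Life-Cycle Asset Management": 5,
})

# The two themes discussed below: front-line and leadership challenges.
front_line = ["Skills Gaps", "Training & Qualification", "Knowledge Transfer"]
leadership = ["Top Management", "Culture of Reliability", "Life-Cycle Asset Management"]

print("Front line:", sum(tallies[c] for c in front_line))   # 70
print("Leadership:", sum(tallies[c] for c in leadership))   # 66
```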
It’s about ‘people’ on the front line
When we look for a common theme among the OPMUG responses, it’s not too surprising to see that it’s “people,” i.e., the biggest variable in improving equipment maintenance, performance, and reliability. Of the nine major categories above, three of them—Skills Gaps, Training & Qualification, and Knowledge Transfer (with a combined total of 70 responses)—point to challenges on the front line of maintenance.
Many responses alluded to difficulties in finding qualified technicians and shortages of skilled tradespeople. A few referenced the Millennial Generation’s communication skills, work habits, and expectations. Several addressed the lack of competencies for, and interest in, industrial maintenance careers.
Capturing the knowledge of workers nearing retirement appeared to be a sizeable challenge for many respondents. They noted that their organizations stood the chance of losing all skills and knowledge gained from years of experience. Furthermore, there was concern that even if they could capture crucial knowledge, without a capable replacement or the mechanism to train new employees, that knowledge would be lost.
It’s about ‘people’ in top management
A second group of categories—Top Management, Culture of Reliability, and Life-Cycle Asset Management (with a combined total of 66 responses)—points to the need for leadership to improve equipment maintenance, performance, and reliability. Whether it’s the pursuit of best practices, asset-management processes, or culture change, top management sets the tone and defines the culture by purposeful actions, or, by default, through inaction.
Some responses tied the challenge of Top Management to struggles with hiring and training priorities, i.e., management’s inability to grasp the severity of skills gaps, shortages, and knowledge-transfer needs. Several mentioned decisions to cut maintenance costs and staff, reductions in time for preventive maintenance, and misinterpretation of the reliability requirements of new equipment.
Others referred to “silo” organizations and decision making that hindered maintenance and hurt the reliability of equipment and processes. These siloed objectives and decisions lead to an organization’s inability to focus on common goals for overall business improvement.
Regarding Culture of Reliability ranking right up there with Skills Gaps as a top challenge: Leading a culture of reliability means that the line of sight between reliability best practices and the goals of the business is understood. Frequently, that line of sight is not so apparent, and reliability best practices come across as a flavor of the month.
Facing our giant
Most equipment challenges lend themselves to reliable and sustainable countermeasures, or corrective actions. The giant we face isn’t so easily addressed: human variation, inconsistency, behaviors, moods, and habits present an ever-changing reliability improvement challenge.
Our giant can be lurking among front-line crews or behind decisions and actions made by top-, mid-level and/or front-line managers. Facing it with slingshots and stones may be our only option, that is, if slingshots and stones represent maintenance fundamentals, available tools, and accepting the reality of the situation.
We can no longer manage equipment performance and reliability the way we always have. There aren’t enough talented people, and there isn’t enough time or money, to continue that journey.
Bottom line, the skills gaps we see today, coupled with training and knowledge-transfer problems, are primarily caused by the fact that top management and reliability and maintenance professionals still aren’t “sitting at the same table” and focusing on common business goals. That’s sad.
Looking to the future, facing our giant will require fewer hands-on people, robust condition monitoring, building reliability into critical at-risk equipment, and, most of all, getting top-level management to believe in reliability best practices. MT
Bob Williamson, CMRP, CPMM, and member of the Institute of Asset Management, is in his fourth decade of focusing on the “people side” of world-class maintenance and reliability in plants and facilities across North America. Contact him at RobertMW2@cs.com.
By Grant Gerke, Contributing Editor
Digital transformation applications in 2017 are moving fast and taking diverse forms. Many industries, such as oil and gas and petrochemical, are quickly acting on better data-acquisition models so operators can move toward online condition-based monitoring for pumps and motors.
According to Brian Atkinson, a consultant with the Industry Solutions Group of Emerson Process Management (emersonprocess.com, Shakopee, MN), pumps account for an estimated 7% of a plant’s or refinery’s maintenance costs. “While a pump failure in a refinery may only affect one part of a process,” he said, “pump failures in an oil field can shut down a well or pipeline.”
During the oil-market boom, operators took run-to-failure approaches with pumps and motors and didn’t install cost-prohibitive wiring to monitor such units in the field. Wireless-network-standardization efforts over the past decade, however, have given operators the ability to implement condition-monitoring strategies and avoid costly shutdowns that are especially painful in lower-price markets.
As an example, Atkinson pointed to a white paper, titled “Beyond Switches for Pump Monitoring,” from Emerson Automation Solutions. It details how oil and gas processing facilities can use cost-effective transmitters to provide continuous condition monitoring and a richer data set on in-the-field pumps. Among other things, it references American Petroleum Institute (API) Standard 682, which provides a roadmap for achieving continuous monitoring with IIoT-based solutions. The standard defines piping plans for pumps to assist processing facilities in selecting sensors and controls for pump auxiliary-seal flush systems.
The white paper illustrates that traditional mechanical switches provide on/off data, while transmitters can communicate a broad range of measured variables and facilitate remote configuration, calibration, and diagnostics. With the transition to transmitters in the field, management can reduce field-maintenance service trips and reallocate those services to other resources.
A prime example of the process industry’s move to continuous, remote monitoring is Pioneer Energy’s captured gas-flaring application for remote shale fields. The Lakewood, CO-based corporation (pioneerenergy.com) provides a turnkey service that captures flared gases at the field site by way of a Mobile Alkane Gas Separator (MAGS) unit that’s separate from the well-drilling application.
Oil-and-gas-shale producers have usually thought of flared gas as a waste product. Remote monitoring, though, gives them the ability to resell or use it to power drilling operations wherever they may be. In Pioneer Energy’s case, that means being able to monitor the gas-separation unit in a central control room hundreds of miles away from well sites.
Pioneer Energy still provides technician services for minor maintenance of its remote MAGS units. According to the company, it uses Opto 22’s groov mobile monitoring to provide field technicians monitoring and control onsite through mobile devices.
“Our service technicians in the oilfield have 4G AT&T tablets that link to the groov server, which is connected to the OPC server,” said Andrew Young, lead controls engineer at Pioneer Energy Services. “They can see real-time operations as they’re en route to a site to do a service call.”
Pioneer Energy’s gas-separator service is the embodiment of a new business outcome enabled by advanced sensor networks in a legacy environment. These types of small optimization strategies have begun to take hold in the oil and gas industry, and should be the rule instead of the exception going forward. MT
Critical-asset data can help identify failures before they occur to avoid downtime and protect the bottom line.
If you could see into the future, you would never miss a production target, endure a safety incident, or have a machine go down. Unfortunately, unless we somehow gain the power of clairvoyance, this fantasy will forever be out of our reach. While we may not be able to see into the future, we can predict it.
By adopting a predictive-maintenance (PdM) strategy, you can mine your critical-asset data and identify anomalies or deviations from their standard performance. Such insights can help you discover and proactively fix issues days, weeks, or even months before they lead to failures. This can help you avoid unplanned downtime, reduce industrial maintenance overspend, and mitigate safety and environmental risks.
The case for predictive maintenance
The sudden loss of a critical industrial asset can be devastating. It can result in unplanned stoppages and maintenance that eat away at your bottom line, while production remains at a standstill. This was the situation for one company operating an oil-sands mine in Canada. The company had to shut down the operation after detecting vibrations in an ore crusher, resulting in a weeks-long production stoppage that had been averaging more than 90,000 barrels/day. According to analysts, each week of downtime reduced quarterly production by about 1.5% and cash flow by about 1%.
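The scale of such a loss is easy to rough out from throughput alone. In this back-of-envelope Python sketch, only the 90,000-barrel/day rate comes from the incident above; the price and the stoppage length are illustrative assumptions:

```python
# Back-of-envelope cost of a production stoppage. Only the
# 90,000-bbl/day rate comes from the incident described above;
# price and duration are illustrative assumptions.
rate_bbl_per_day = 90_000
price_per_bbl = 50.0    # assumed price, USD/bbl
weeks_down = 2          # assumed stoppage length

lost_barrels = rate_bbl_per_day * 7 * weeks_down
lost_revenue = lost_barrels * price_per_bbl

print(f"Lost output:  {lost_barrels:,} bbl")
print(f"Lost revenue: ${lost_revenue:,.0f}")
```

Even with conservative assumptions, the numbers dwarf the cost of instrumenting the asset for condition monitoring.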
Beyond the impact on production and profits, unexpected failures also can cause catastrophic events, such as explosions or chemical leaks, that threaten lives and the environment.
Many companies use robust industrial-maintenance programs and costly maintenance-service agreements to help avoid these issues. However, even the most comprehensive maintenance programs likely won’t eliminate all unplanned downtime. It can take only one failure to grind your operations to a halt for an extended period of time.
A smarter approach
Predictive maintenance delivers a more data-driven approach to industrial-maintenance programs. It uses predictive analytics and machine-learning algorithms, based on historical and real-time data, to identify specific issues on the horizon. Often these issues won’t show any physical signs of degradation—even a sharp human eye or an intuitive and well-trained maintenance technician wouldn’t be able to catch them.
In addition to helping prevent downtime, a PdM approach can better identify true maintenance needs. This can assist in making sure that you are targeting personnel deployment, maintenance activities, and maintenance dollars where they are needed most.
Predictive maintenance can be especially useful in industries where the uptime of critical assets drives the bottom line. This includes large, heavy equipment in oil and gas, and mining operations, as well as critical machines in continuous-manufacturing operations.
A perfect example is a large, multistage compressor that experienced a bearing failure resulting in more than $3 million in maintenance and lost productivity. A postmortem on the incident, which involved reviewing 16 months of data, found that the bearing cooling system had not been operating correctly for six months.
Had this data been used as part of a PdM strategy, the company likely would have been able to identify the bearing degradation and its root cause before the failure actually happened. What’s more, the company would have been able to identify detailed preventive-maintenance steps for the cooling system.
Predictive maintenance also can be valuable in operations that experience high maintenance costs.
Often, companies can invest a lot of time and resources in maintenance but lack data to know whether their strategy is effective and addressing their actual needs. Predictive maintenance can help uncover unnecessary maintenance, which could save millions of dollars every year in some industries. This was another discovery in the compressor case. The company was performing certain maintenance activities that were unnecessary and could have been eliminated.
How it works
Predictive maintenance doesn’t require an extensive overhaul of your infrastructure. Rather, it can be deployed on your existing integrated-control and information infrastructure.
The process begins with discussions to identify what data you want to collect, what potential failures or other issues you want to predict, and what issues have arisen in the past. From there, the relevant historical data is collected from sensors, industrial assets, and fault logs.
Predictive-maintenance analytics software then examines the data to determine root causes and early-warning indicators from past downtime issues. Finally, the analytics software develops and deploys “agents” that monitor data traffic either locally or in the cloud.
Analytics software uses two types of agents. The first type is failure agents, which watch for patterns that are known to predict a future failure. If such patterns are detected, the agents alert plant personnel and deliver a prescribed solution.
The second type is anomaly agents, which watch normal operating patterns and look for changes, such as operating or environmental-condition changes. These agents also alert personnel of any detected changes so they can investigate and take corrective action if necessary.
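The two agent types can be sketched in a few dozen lines. This is a minimal illustration, not vendor software: the rolling window, the 3-sigma band, and the thresholds are arbitrary assumptions chosen for the example:

```python
from collections import deque
from statistics import mean, stdev

class AnomalyAgent:
    """Flags a reading that falls outside the normal operating band
    (here, a rolling mean +/- 3 standard deviations)."""
    def __init__(self, window=50, min_history=10):
        self.history = deque(maxlen=window)
        self.min_history = min_history

    def observe(self, value):
        alert = False
        if len(self.history) >= self.min_history:
            mu, sigma = mean(self.history), stdev(self.history)
            alert = sigma > 0 and abs(value - mu) > 3 * sigma
        self.history.append(value)
        return alert

class FailureAgent:
    """Watches for a known failure precursor: several consecutive
    readings above a threshold."""
    def __init__(self, threshold, run_length=3):
        self.threshold, self.run_length, self.run = threshold, run_length, 0

    def observe(self, value):
        self.run = self.run + 1 if value > self.threshold else 0
        return self.run >= self.run_length

# Steady vibration readings followed by a sustained spike.
anomaly, failure = AnomalyAgent(), FailureAgent(threshold=5.0)
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95] * 5 + [5.2, 5.4, 5.6]

log = []
for v in readings:
    log.append((v, anomaly.observe(v), failure.observe(v)))
    if log[-1][1] or log[-1][2]:
        print(f"reading {v}: anomaly={log[-1][1]}, failure={log[-1][2]}")
```

Note the division of labor: the anomaly agent fires on the first out-of-band reading, while the failure agent waits for the known precursor pattern (a sustained run) before alerting.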
Your crystal ball
Predictive technology has been around for decades. It’s used to detect credit-card fraud, fine-tune marketing programs, and even help us search the Internet. Its role in the industrial world takes the form of a rigorous documentation of events and failures that can help us see and address machine or equipment issues in their earliest forms.
Many manufacturers already see the value of historical failure reports as a tool to help prevent failures and downtime in the future. By using this data, which already exists in your assets, you too can reduce downtime surprises, cut down unnecessary maintenance, and potentially reduce risks in your operations. MT
Information for this article was provided by Doug Weber, engineering manager, and Phil Bush, remote monitoring and analytics product manager, Rockwell Automation, Milwaukee. For more information, visit rockwellautomation.com.
Biopharmaceutical manufacturing company AstraZeneca redefines reliability to streamline more-effective maintenance processes.
By Michelle Segrest, Contributing Editor
Even though the AstraZeneca manufacturing facility in Mt. Vernon, IN, looks like a hospital surgical unit—with key equipment separated into concentrated clean rooms—for years it operated like an emergency room. When an equipment breakdown occurred, personnel jumped into action, triaging the issue and not always looking into the true symptoms to prevent future occurrences.
The company acquired the Mt. Vernon facility in August 2015. With a new reliability unit in place and an Operations Excellence Team, the site now has teams focused on preventing emergencies, instead of addressing them.
Reliability and maintenance can be a challenge when maintaining a high standard for the pharmaceutical environment. As you walk through the facility, the white walls and floors glisten against the shiny, almost mirror-like, stainless-steel equipment. Equipment and personnel rooms serve as airlocks between the corridors and the manufacturing rooms. The airlocks are guards against dust, dander, allergens, or other elements that could contaminate the critical medicine that is being manufactured. The switch from a reactive to a proactive, risk-based approach has taken reliability in the 700,000-sq.-ft. manufacturing area to a new level.
“Our first step was to separate our reliability team from the day-to-day commotion,” explained facilities engineer Andrew Carpenter. “We had to be sure they understood that reliability is different than maintenance, and we had to all take this seriously. We had many people who were specialists and were relied upon for troubleshooting and fixing emergency issues. It was a complete mindset change.”
The new reliability team received support from upper management and buy-in from the team. Although some roles changed, the team remained headcount neutral. This, along with clear alignment of goals, became the keys to a successful transition.
“If you are starting a reliability program in your plant, call it what it is,” senior building and reliability manager Chris Nolan said. “Reliability is different than maintenance. The goal is to get to a certain utopia. As your group grows, you all become more focused on that reliability side, but when you are starting out with a reactive-maintenance program, and you want to transition to one that is reliability based, there is a different vision. This must be explained and understood. Now we have processes in place to aid in the prevention of emergencies and more organized efforts to quickly respond should the need arise.”
With an investment in new tools and technology, including additional vibration, infrared thermography, and ultrasound training, the newly structured, two-year-old team measures its return on investment in high-quality performance and products.
“A key driver within our business is quality,” Nolan said.
AstraZeneca is a science-led, biopharmaceutical business that discovers, develops, manufactures, and supplies innovative medicines for millions worldwide—primarily in the areas of respiratory, cardiovascular and metabolic, and oncology. The Mt. Vernon site manufactures oral-solids medicines—primarily for Type 2 diabetes treatment.
The maintenance and reliability group focuses on maintaining the utilities, purified water, HVAC, manufacturing equipment, and all Good Manufacturing Practice (GMP) maintenance.
A new process
The Mt. Vernon-site reliability team adopted a mission statement common in the industry: “Anyone who improves a process or a piece of equipment is a reliability leader.”
The simple vision was broken down into specific goals and targets. Nolan explained that 2015 was all about building a foundation, while 2016 was the year to focus on root-cause analysis. The team received early help from consultant group Life Cycle Engineering (LCE, Charleston, SC, LCE.com).
“In pharma, when somebody uses the word ‘criticality’ they go straight to quality,” Nolan said. “LCE helped us identify the tools we needed to show overall criticality—business cost, quality, mean time between failure. Andrew [Carpenter] led us through a criticality assessment at our site and we banked that into different categories, including equipment, water purification, parts redundancy, and packaging items. Now we do an assessment and re-rank our critical categories that need attention every year. We are in the process of doing that now. This helps us focus our efforts and has become a game-changer for us.”
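A criticality assessment of this kind often boils down to a weighted scoring model that is re-run each year. The sketch below is a generic illustration, not AstraZeneca’s or LCE’s actual method; the weights, asset names, and 1-to-10 scores are all hypothetical, with only the criteria names echoing those mentioned above:

```python
# Generic weighted criticality scoring -- an illustration only.
# Weights, asset names, and 1-10 scores are hypothetical.
weights = {"business_cost": 0.40, "quality_impact": 0.35, "mtbf_risk": 0.25}

assets = {
    "purified-water skid": {"business_cost": 9, "quality_impact": 10, "mtbf_risk": 6},
    "tablet press":        {"business_cost": 8, "quality_impact": 9,  "mtbf_risk": 7},
    "packaging line":      {"business_cost": 6, "quality_impact": 5,  "mtbf_risk": 8},
}

def criticality(scores):
    """Weighted sum across the assessment criteria."""
    return sum(weights[c] * scores[c] for c in weights)

# Re-ranking yearly is just re-sorting on the latest scores.
ranked = sorted(assets, key=lambda a: criticality(assets[a]), reverse=True)
for name in ranked:
    print(f"{name}: {criticality(assets[name]):.2f}")
```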
The reliability group became its own entity within the plant’s maintenance organization.
“We were doing a really good job of fixing issues, but needed to work on following up after the issue, getting to the root cause, and putting processes in place to prevent the issue from happening again,” Carpenter said.
Two years in, Carpenter and Nolan are beginning to see the fruits of the team’s labor. “We can see that it is working and we have come a long way.”
Redefining the maintenance and reliability functions was an anchor in achieving some early wins for the new team.
“We are all here to get the product out of the door, but the difference is simply the things we focus on,” Nolan said. “Maintenance right now focuses on the day-to-day activities—the preventive maintenance piece and execution of that at a high level. But when you are executing you are challenged on the day-to-day things, so it is hard to find that balance of time to take a look back on the long-term items, like the vision. For us, the difference between maintenance and reliability is that reliability is getting into the data mining of the maintenance activities. Particularly in the pharma environment, that is a big piece that ties back to the quality culture, as well. The maintenance piece is very tactical, while reliability centers around more planning and vision.”
Carpenter said the team’s vision began to take shape when it zoomed in on the root-cause analysis program. About six months into the program’s launch, Nolan began to notice a distinct change in the culture.
“It was a Friday afternoon before a three-day holiday weekend and normally everybody was ready to scoot,” he said. “We had one of our metrology calibration technicians and engineering technicians having a serious conversation about a particular problem. It turned into an hour-and-a-half discussion of digging into really finding the problem, turning it into a root-cause analysis. That is the first time when I really thought this whole program began to click. These guys were looking beyond the fix and they were passionate about preventing it from happening again.”
Carpenter explained that the change involved a clear switch from simply fixing a problem to a focus on the big picture. “We are better at documenting the data and finding ways to prevent failures,” he said.
One of the areas the team focused on heavily at the start of the reliability program was predictive maintenance. Engineering technicians and predictive-maintenance technicians were sent to Level I vibration, infrared, ultrasound, and laser-alignment training. It didn’t take long to see the return on investment.
Nolan said another key win was bringing the storeroom into the reliability discussion.
“The storeroom is a key to reliability,” Nolan said. “Paying attention to what is going on in the storeroom tells you what is going on in the plant. What goes out of your storeroom is a huge check and balance of your maintenance process.”
Realizing how much can be learned from problems and mistakes also made a big difference.
“Problems are gold,” Nolan said. “Problems within your processes give you ‘aha’ moments. This allows you to bring people together to look at what is going on and talk about how can it be better. Don’t ever be afraid to share a problem because usually it can positively impact you, your group, or someone else.” MT
Michelle Segrest is president of Navigate Content Inc., and has been a professional journalist for 28 years. She specializes in the industrial processing industries and has toured manufacturing facilities in 41 cities in six countries on three continents. If your facility has an interesting maintenance and/or reliability story to tell, please contact her at firstname.lastname@example.org.
How well you treat these industry workhorses affects how long, how safely, and how cost-effectively they’ll run.
Belt-conveyor systems are used for a wide range of purposes. Regardless of the application, minimizing the cost per ton to move material and items without compromising safety, product integrity, and efficiency is accomplished by harnessing the best available technologies and maintenance practices.
Developing and implementing practical preventive-maintenance (PM) programs that have measurable results is key to reducing costs and minimizing your cost per ton. Continual daily upkeep is critical to extending conveyor-belt and component life.
The entire system, including the belt, idlers, pulleys, frame, and accessories, should be included in the maintenance program. Routine system inspections, designed to encompass all aspects of each conveyor will help identify issues that, if not addressed and corrected, will cause catastrophic component failure, resulting in ancillary damage and potential safety hazards.
Prior to any inspection, perform appropriate lockout/tagout verification procedures. Ideally, the conveyor system is shut down and empty. This allows inspectors to check for damage to all components, including the belt and splice. Any damage noted during the inspection should be repaired as quickly as possible to prevent further degradation.
Keep in mind that the following checklists are general guides, and not all-inclusive. The key words are clean and operational. Pulleys or idlers that have material build-up on them will cause tracking problems. The same can be said for pulleys with uneven lagging wear. Belt-cleaning devices or systems, plows, and self-aligning idlers must be operational to perform their tasks. Belt damage, pulley damage, and tracking problems will result if these accessory components are not maintained.
Shut-Down-Conveyor-Inspection Checklist. A typical maintenance-inspection walk-through of a shut-down conveyor should include, but not be limited to, the following 19 items:
- Perform the lockout/tagout (LOTO) verification procedure.
- Identify safety hazards.
- Complete belt inspection.
- Inspect head pulley and/or drive pulley for damage, cleanliness, and worn lagging.
- Inspect for proper lubrication of bearings and mechanical devices.
- Inspect for the presence of material build-up and trapped material.
- Inspect skirting in the loading area for proper adjustment and condition.
- Inspect impact/slider bed or impact idler for damage and cleanliness.
- Inspect return- and carrying-side idlers for damage, cleanliness, and free-turning.
- Inspect all self-aligning idlers, both carrying- and return-side to ensure they are capable of operating (actuating from belt friction) and not tied off.
- Inspect for cleanliness of primary and secondary loading station.
- Inspect trippers to ensure they are clean and operational.
- Inspect structure/frame for integrity and alignment.
- Inspect tail-pulley condition.
- Inspect head-pulley cleaner to ensure it is operational.
- Inspect head, bend, and snub-pulley condition.
- Inspect the take-up to ensure it is clean and operating.
- Ensure plow (V-guide or angle) is operational.
- Ensure all bearings are clean and capable of operating.
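Checklists like the one above lend themselves to simple digital record-keeping, so failed items automatically feed the corrective-action plan discussed later. A minimal Python sketch (the item wording abbreviates the checklist above; the pass/fail results are hypothetical):

```python
# Recording inspection results so failed items feed the
# corrective-action plan. Item wording abbreviates the checklist
# above; the pass/fail results are hypothetical.
checklist = [
    "LOTO verification performed",
    "Head/drive pulley free of damage, build-up, and worn lagging",
    "Bearings and mechanical devices properly lubricated",
    "Loading-area skirting properly adjusted and in good condition",
    "Self-aligning idlers operational (not tied off)",
    "Take-up clean and operating",
]

results = {item: True for item in checklist}
results["Loading-area skirting properly adjusted and in good condition"] = False
results["Bearings and mechanical devices properly lubricated"] = False

corrective_actions = [item for item in checklist if not results[item]]
for action in corrective_actions:
    print("Corrective action needed:", action)
```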
Once inspection of the shut-down conveyor is completed, confirm that all personnel, tools, and equipment are clear of the system and accounted for to avoid injuries or damage to the equipment when it is restarted. Next, energize the system and let it run empty to ensure proper belt tracking. Perform another visual walk-through and listen carefully to make sure there are no unusual noises, which could indicate idler or bearing failure or rubbing of the belt against the conveyor structure. Be sure the belt is running reasonably well before introducing a load and conducting the next inspection. Note that empty and loaded conveyor systems may track differently. Furthermore, remember that any component with which the belt comes in contact will affect its tracking.
A typical maintenance-inspection walk-through of a loaded (running) conveyor system should include, but not be limited to, the following 13 items:
- Inspect for satisfactory tracking along the belt’s entire length.
- Inspect for and ensure there are no bearing noises.
- Inspect for primary and secondary loading-station spillage.
- Inspect carrying-side idlers to ensure they are turning freely.
- Inspect self-aligning carry idlers to ensure they are functioning (actuating from belt friction).
- Inspect for excess material spillage.
- Inspect head and/or drive pulley, snub, and bend pulleys to ensure they are running smoothly with no slippage.
- Inspect belt cleaners to ensure they are functioning.
- Inspect return idlers to ensure they are clean and turning freely.
- Inspect tail pulley to ensure that it is turning freely without product build-up or carryback.
- Inspect take-up pulley to ensure it is turning freely without bearing noise, is clean, and moving freely in the frame.
- Inspect for belt tracking, in general.
- Inspect plow (V-Guide or angle) to ensure it is operating properly.
Following completion and documentation of these inspections, a corrective-action plan should be implemented. Any safety concern must be addressed immediately, including, among other things, installation and/or repair of conveyor crossovers, safety-stop cables, failed holdbacks on incline conveyors, misalignment switches, motor guards, hand rails, and cleaning of walkways.
The importance of clean conveyor systems can’t be overstated. Cleanliness is a safety issue. Premature conveyor belt wear, idler and pulley failure, along with structural damage to the conveyor frame are all indicators of a system experiencing significant carry-back and fugitive-material contamination. Product build-up on return-side pulleys and idlers not only reflects a housekeeping issue, it can lead to belt-tracking problems and added stresses on the splice. If a belt isn’t clean on the return flight, any pulley that comes in contact with the belt’s carry side will accumulate product.
Material build-up on a belt and components doesn’t simply cause tracking problems. It could bring a system to a grinding halt, costing the operation countless dollars in lost material, downtime, clean-up, damage to the system, and, potentially, personal injuries. A clean conveyor system is not only a safer system, it can minimize your cost per ton.
Primary and secondary belt-cleaning systems at the discharge area and plows in front of the tail pulley are essential to reduce damage to the components. Sticky materials present a real challenge when it comes to preventing carryback. A well-engineered and maintained cleaning system to minimize carryback will reduce associated cost. Some variables to consider when designing and installing a cleaning system include the material to be conveyed, environmental and operational factors, and belt type and condition.
It’s a given in any plant: Safety should be the number one priority of all owner/operators and workers, and an integral part of the workplace culture. Zero is the only acceptable number of incidents and accidents. Safe habits take effort to develop, but once developed they are less likely to be broken. Once a culture of safety is established in an organization, it will perpetuate itself.
Constantly pay attention to your work environment and those working around you. This situational awareness could prevent an accident before it happens and save you and the organization unwanted pain and expense. When it comes to conveyors, keep these basic safety tips in mind:
- Always perform proper lockout/tagout verification procedures.
- Use only trained and authorized maintenance and operating personnel.
- Keep clothing, fingers, hair, and other body parts away from moving conveyor parts.
- Don’t climb, step, sit, or ride on conveyors.
- Don’t overload conveyors.
- Don’t remove or alter conveyor guards or safety devices.
- Know the location and function of all stop/start controls and keep the locations free of obstructions.
- Confirm all personnel are clear of a conveyor before starting or restarting it.
- Keep areas around conveyors clean and clear of obstructions.
- Report all unsafe practices to a supervisor. MT
Information in this article was provided by Don Sublett of Motion Industries (Birmingham, AL). Sublett has worked in areas of conveyor-belt design and service since 1976 and is an active member of various professional associations in the field. For more information, visit MotionIndustries.com or see the Mi Hose & Belting video here.
Want to get the most from your electric motors? Think of St. Louis-based EASA (Electrical Apparatus Service Association, easa.com) as a treasure trove of practical information and its members as a “go to” source for help with specific applications. Consider this insight on motor/system baselines.
— Jane Alexander, Managing Editor
According to EASA’s technical experts, changes in motor/system vibration readings provide the best early warning of developing problems in a motor or system component. Other parameters to monitor may include operating temperature of critical components, mechanical tolerances, and overall system performance, including outputs such as flow rate, tonnage, and volume.
Motor-specific baselines incorporate records of electrical, mechanical, and vibration tests performed when units are placed in operation or before they’re put in storage. Ideally, baselines would be obtained for all new, repaired, and in situ motors, but this may not be practical for some applications. These baselines typically include some or all of the following:
Changes in these parameters usually indicate that a vital system component is damaged or about to fail. Other electrical tests may include insulation resistance, lead-to-lead resistance at a known temperature, no-load current, no-load voltage, and starting characteristics.
QUICK TIP: Some changes in the current and speed may be normal, depending on the type of load.
Motor current signature analysis (MCSA)
This test diagnoses squirrel cage rotor problems, e.g., broken bars or an uneven air gap. It’s more accurate if a baseline is established early in the motor’s life.
Mechanical baselines normally consist of measuring shaft runout (total indicated runout, or TIR) and checking for a soft foot.
Although overall vibration readings can be used as baseline data, Fast Fourier Transform (FFT) spectra in all three planes at each bearing housing are preferred (see “Vibration Analysis” on page 22). Shaft proximity probes can be used to determine sleeve bearing motor baselines.
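As a rough illustration of how such a spectral baseline comparison can work, the sketch below (Python with NumPy; the sample rate, signals, and alarm ratio are all invented for the example) computes a single-sided FFT amplitude spectrum and flags frequency bins whose amplitude has grown well beyond the stored baseline:

```python
import numpy as np

def fft_spectrum(samples, fs):
    """Single-sided amplitude spectrum of a vibration signal."""
    n = len(samples)
    window = np.hanning(n)  # reduce spectral leakage
    spec = np.abs(np.fft.rfft(samples * window)) * 2.0 / np.sum(window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

def flag_changes(baseline, current, ratio=2.0, floor=1e-6):
    """Indices of frequency bins whose amplitude grew by `ratio` or more."""
    return np.where(current >= ratio * np.maximum(baseline, floor))[0]

fs = 2048                    # Hz, illustrative sample rate
t = np.arange(4096) / fs
baseline_sig = 0.5 * np.sin(2 * np.pi * 30 * t)                  # 1x running speed
current_sig = baseline_sig + 0.4 * np.sin(2 * np.pi * 120 * t)   # new fault tone

freqs, base_spec = fft_spectrum(baseline_sig, fs)
_, cur_spec = fft_spectrum(current_sig, fs)
for i in flag_changes(base_spec, cur_spec):
    print(f"amplitude change near {freqs[i]:.1f} Hz")
```

In practice the baseline spectra would come from the commissioning measurements described above, and alarm criteria would follow your vibration program’s severity guidelines rather than the fixed factor used here.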
Infrared thermography can detect changes in the operating temperature of critical motor components, especially bearings.
Comparing factory terminal winding resistance and no-load amps with data taken under load can be useful when monitoring the condition of a new motor or troubleshooting system problems. Factory baselines are often available from the manufacturer or its website. The accuracy of factory data depends on how it was obtained, but it’s usually sufficient for field use.
Baseline data for a newly installed motor could reveal an error, e.g., misconnection for an incorrect voltage, and prevent a premature motor failure. Rather than simply “bumping” a motor for rotation before coupling it to the load, operate it long enough to measure the line current for all three phases, as well as the voltage and vibration levels.
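The three-phase current readings from such a run can be reduced to a single unbalance figure for later comparison. A minimal sketch, using the common max-deviation-from-mean definition of unbalance; the readings are invented, and any alarm limit you apply should come from the motor documentation or your own program:

```python
def current_unbalance_percent(amps):
    """Max deviation from the mean phase current, as a percent of the mean."""
    mean = sum(amps) / len(amps)
    return max(abs(a - mean) for a in amps) / mean * 100.0

baseline = [10.1, 10.0, 10.2]   # amps per phase at commissioning (illustrative)
today = [10.3, 9.2, 10.4]       # amps per phase from a later check

print(f"baseline unbalance: {current_unbalance_percent(baseline):.1f}%")
print(f"current unbalance:  {current_unbalance_percent(today):.1f}%")
```

A noticeable rise in this figure relative to the commissioning baseline is one simple trigger for a closer look at the supply, connections, or windings.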
QUICK TIP: Comparing the baselines of a failed motor and its replacement could reveal application- or process-related weaknesses in the system.
Repaired motor baselines
Service centers usually provide no-load and/or full-load (when stipulated) test data for repaired motors, including voltage, current, and vibration spectra. Comparing these results with historical baselines and those obtained on site when the motor is returned to service may confirm the quality of the repair or possibly reveal underlying system problems. For example, increased vibration levels in on-site tests might indicate a deteriorating motor base or a problem with the driven equipment rather than a balancing issue with the motor.
For newly repaired motors that had been in operation for many years, baseline comparisons are invaluable in root-cause failure analysis and may even expose consequential damage from certain kinds of failures, e.g., a broken shaft. To correctly identify cause and effect and prevent recurrences, always investigate equipment failures at the system level. MT
For details on using motor/system baselines, as well as expert advice on a wide range of other motor-related issues, download Getting the Most from Your Electric Motors, or contact a local EASA service center.