8:09 pm
April 29, 2009

Going Wireless: Wireless Technology Is Ready For Industrial Use

Wireless works in a plant, but you’ll want to be careful regarding which “flavor” you choose

Wireless technology now provides secure, reliable communication for remote field sites and applications where wires cannot be run for practical or economic reasons. For maintenance purposes, wireless can be used to acquire condition monitoring data from pumps and machines, effluent data from remote monitoring stations, or process data from an I/O system.

For example, a wireless system monitors a weather station and the flow of effluent leaving a chemical plant. The plant’s weather station is 1.5 miles from the main control room. It has a data logger that reads inputs from an anemometer to measure wind speed and direction, a temperature gauge and a humidity gauge. The data logger connects to a wireless remote radio frequency (RF) transmitter module, which broadcasts a 900MHz, frequency hopping spread spectrum (FHSS) signal via a Yagi directional antenna installed at the top of a tall boom located beside the weather station building. This posed no problem.

However, the effluent monitoring station was thought to be impossible to connect via wireless. Although the distance from this monitoring station to the control room is only one-quarter mile, the RF signal had to pass through a four-story boiler building. Nevertheless, the application was tested before installation, and it worked perfectly. The lesson here is that wireless works in places where you might think it can’t. All you have to do is test it.

There are many flavors of wireless, and an understanding of them is needed to determine the best solution for any particular application. Wireless can be licensed or unlicensed, Ethernet or serial interface, narrowband or spread spectrum, secure or open protocol, Wi-Fi… the list goes on. This article provides an introduction to this powerful technology.

The radio spectrum
The range of approximately 9 kilohertz (kHz) to 300 gigahertz (GHz) can be used to broadcast wireless communications. Frequencies higher than these are part of the infrared spectrum, light spectrum, X-rays, etc. Since the RF spectrum is a limited resource used by television, radio, cellular telephones and other wireless devices, the spectrum is allocated by government agencies that regulate what portion of the spectrum may be used for specific types of communication or broadcast.

In the United States, the Federal Communications Commission (FCC) governs the allocation of frequencies to non-government users. The FCC has limited the use of Industrial, Scientific and Medical (ISM) equipment to the 902-928MHz, 2400-2483.5MHz and 5725-5875MHz bands, with limitations on signal strength, power and other radio transmission parameters. These bands are known as unlicensed bands, and can be used freely within FCC guidelines. Other bands in the spectrum can be used with the grant of a license from the FCC. (Editor’s Note: For a quick definition of the various bands in the RF spectrum, as well as their uses, log on to: http://encyclopedia.thefreedictionary.com/radio+frequency )

Licensed or unlicensed
A license granted by the FCC is needed to operate in a licensed frequency. Ideally, these frequencies are interference-free, and legal recourse is available if there is interference. The drawbacks are a complicated and lengthy licensing procedure, the inability to purchase off-the-shelf radios (since they must be manufactured for the licensed frequency) and, of course, the costs of obtaining and maintaining the license.


License-free implies the use of one of the frequencies the FCC has set aside for open use without needing to register or authorize them. Based on where the system will be located, there are limitations on the maximum transmission power. For example, in the U.S., in the 900MHz band, the maximum power may be 1 Watt or 4 Watts EIRP (Effective Isotropic Radiated Power).
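The EIRP limit above can be checked with simple arithmetic: EIRP in dBm is transmitter power (converted to dBm) plus antenna gain, minus feed-line losses. A minimal sketch, with hypothetical transmitter, antenna and cable-loss figures:

```python
import math

def dbm(watts):
    """Convert power in watts to dBm."""
    return 10 * math.log10(watts * 1000)

def eirp_dbm(tx_power_w, antenna_gain_dbi, cable_loss_db=0.0):
    """Effective Isotropic Radiated Power in dBm: transmitter power
    plus antenna gain, minus feed-line loss."""
    return dbm(tx_power_w) + antenna_gain_dbi - cable_loss_db

# Hypothetical 900MHz link: 1 Watt transmitter, 6 dBi Yagi, 2 dB coax loss
eirp = eirp_dbm(1.0, 6.0, 2.0)   # 30 + 6 - 2 = 34 dBm
limit = dbm(4.0)                  # a 4 Watt EIRP cap is about 36 dBm
print(f"EIRP = {eirp:.1f} dBm, limit = {limit:.1f} dBm, legal = {eirp <= limit}")
```

In practice this means a high-gain antenna may force the transmitter power down to stay under the cap.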

The advantages of using unlicensed frequencies are clear: no cost, time or hassle in obtaining licenses; many manufacturers and suppliers who serve this market; and lower startup costs, because a license is not needed. The drawback lies in the idea that since these are unlicensed bands, they can be “crowded” and, therefore, may lead to interference and loss of transmission. That’s where spread spectrum comes in. Spread spectrum radios deal with interference very effectively and perform well, even in the presence of RF noise.

Spread spectrum systems
Spread spectrum is a method of spreading the RF signal across a wide band of frequencies at low power, versus concentrating the power in a single frequency as is done in narrowband transmission. Narrowband refers to a signal that occupies only a small section of the RF spectrum, whereas a wideband or broadband signal occupies a larger section of it. The two most common forms of spread spectrum radio are frequency hopping spread spectrum (FHSS) and direct sequence spread spectrum (DSSS). Most unlicensed radios on the market are spread spectrum.

As the name implies, frequency hopping changes the frequency of the transmission at regular intervals of time. The advantage is obvious: because the transmitter changes its broadcast frequency so often, only a receiver programmed with the same algorithm is able to listen and follow the message. The receiver must be set to the same pseudo-random hopping pattern, and listen for the sender’s message at precisely the correct time on the correct frequency. Fig. 1 shows how the frequency of the signal changes with time. Each frequency hop is equal in power and dwell time (the length of time the radio stays on one channel). Fig. 2 shows a two-dimensional representation of frequency hopping, showing that the frequency of the radio changes for each period of time. The hop pattern is based on a pseudo-random sequence.
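The shared hopping pattern can be sketched in a few lines: transmitter and receiver seed identical pseudo-random generators, so both visit the same channels in the same order. This is an illustration only; the seed, channel plan and sequence generator here are invented, and real radios use vendor-specific algorithms that also enforce the FCC's channel-spacing rules.

```python
import random

def hop_sequence(seed, channels, length):
    """Pseudo-random hop pattern: any radio seeded the same way
    produces -- and can therefore follow -- the same sequence."""
    rng = random.Random(seed)
    return [rng.choice(channels) for _ in range(length)]

# 902-928MHz ISM band carved into hypothetical 500 kHz channels
channels = [902.0 + 0.5 * n for n in range(52)]

tx_hops = hop_sequence(seed=0xC0FFEE, channels=channels, length=8)
rx_hops = hop_sequence(seed=0xC0FFEE, channels=channels, length=8)
assert tx_hops == rx_hops   # same seed: the receiver follows every hop

# A listener with the wrong seed lands on a different channel sequence
eavesdropper = hop_sequence(seed=1234, channels=channels, length=8)
print(tx_hops != eavesdropper)
```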


DSSS combines the data signal with a higher data-rate bit sequence, also known as a “chipping code,” thereby “spreading” the signal over greater bandwidth. In other words, the signal is multiplied by a noise-like signal generated from a pseudo-random sequence of 1 and -1 values. The receiver then multiplies the received signal by the same sequence to arrive at the original message (since 1 x 1 = 1 and -1 x -1 = 1).
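The multiply-to-spread, multiply-again-to-despread idea can be demonstrated with a toy baseband model. The 11-chip code length and the seed below are arbitrary choices for illustration (real DSSS radios use defined codes, such as the 11-chip Barker sequence in 802.11b):

```python
import random

def pn_code(seed, length):
    """Pseudo-random chipping code of +1/-1 chips."""
    rng = random.Random(seed)
    return [rng.choice((1, -1)) for _ in range(length)]

def spread(bits, code):
    """Spread each +1/-1 data bit across len(code) chips by
    multiplying it with every chip of the code."""
    return [bit * chip for bit in bits for chip in code]

def despread(chips, code):
    """Multiply received chips by the same code and sum: since
    1 x 1 = 1 and -1 x -1 = 1, each bit's chips add up coherently."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        acc = sum(c * k for c, k in zip(chips[i:i + n], code))
        out.append(1 if acc > 0 else -1)
    return out

code = pn_code(seed=7, length=11)
data = [1, -1, -1, 1]
received = spread(data, code)          # 4 bits become 44 chips
print(despread(received, code))        # -> [1, -1, -1, 1]
```

Because each bit is decided by a sum over 11 chips, a few corrupted chips still despread to the correct bit, which is the root of the interference tolerance discussed later.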

When the signal is “spread,” the transmission power of the original narrowband signal is distributed over the wider bandwidth, thereby decreasing the power at any one particular frequency (also referred to as low power density). Fig. 3 shows the signal over a narrow part of the RF spectrum. In Fig. 4, that signal has been spread over a larger part of the spectrum, keeping the overall energy the same, but decreasing the energy per frequency. Since spreading the signal reduces the power in any one part of the spectrum, the signal can appear as noise. The receiver must recognize this signal and demodulate it to arrive at the original signal without the added chipping code. FHSS and DSSS both have their place in industry and can both be the “better” technology based on the application. Rather than debating which is better, it is more important to understand the differences, and then select the best fit for the application. In general, a decision involves:

  • Throughput
  • Collocation
  • Interference
  • Distance
  • Security

Throughput
Throughput is the average amount of data communicated in the system every second. This is probably the first decision factor in most cases. DSSS has a much higher throughput than FHSS because it uses its bandwidth more efficiently and employs a much larger section of the spectrum for each transmission. In most industrial remote I/O applications, the throughput of FHSS is not a problem.

As the size of the network changes or the data rate increases, this may become a greater consideration. Most FHSS Ethernet radios offer a throughput of 50-115 kbps. Most DSSS radios offer a throughput of 1-10 Mbps. Although DSSS radios have a higher throughput than FHSS radios, one would be hard-pressed to find any DSSS radios that serve the security and distance needs of the industrial process control and SCADA market. Unlike FHSS radios, which operate over 26MHz of the spectrum in the 900MHz band (902-928MHz), and DSSS radios, which operate over 22MHz of the 2.4GHz band, licensed narrowband radios are limited to 12.5kHz of the spectrum. Naturally, as the width of the spectrum is limited, the bandwidth and throughput will be limited as well. Most licensed-frequency narrowband radios offer a throughput of 6400 to 19200 bps.

Collocation
Collocation refers to having multiple independent RF systems located in the same vicinity. DSSS does not allow for a high number of radio networks to operate in close proximity as they are spreading the signal across the same range of frequencies. For example, within the 2.4GHz ISM band, DSSS allows only three collocated channels. Each DSSS transmission is spread over 22MHz of the spectrum, which allows only three sets of radios to operate without overlapping frequencies.
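The three-channel figure follows directly from the band and channel widths. A quick sketch that packs 22MHz channels side by side into the 2400-2483.5MHz band (the channel edges are simplified; actual 802.11 channel plans stagger center frequencies differently):

```python
def collocated_channels(band_start, band_stop, width):
    """Center frequencies (MHz) of non-overlapping channels packed
    side by side between band_start and band_stop."""
    centers = []
    f = band_start + width / 2
    while f + width / 2 <= band_stop:
        centers.append(f)
        f += width
    return centers

chans = collocated_channels(2400.0, 2483.5, 22.0)
print(len(chans), chans)   # -> 3 [2411.0, 2433.0, 2455.0]
```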

FHSS, on the other hand, allows multiple networks to use the same band because of different hopping patterns. Hopping patterns that use different frequencies at different times over the same bandwidth are called orthogonal patterns. FHSS uses orthogonal hopping routines to operate multiple radio networks in the same vicinity without their interfering with each other. That is a huge plus when designing large networks and needing to separate one communication network from another. Many lab studies show that up to 15 FHSS networks may be collocated, whereas only three DSSS networks may be collocated. Narrowband radios obviously cannot be collocated, as they operate on the same 12.5kHz of the spectrum.

Interference
Interference is RF noise in the vicinity and in the same part of the RF spectrum. A combining of the two signals can generate a new RF wave, or can cause losses or cancellation in the intended signal. Spread spectrum in general is known to tolerate interference very well, although there is a difference in how the different flavors handle it. When a DSSS receiver finds narrowband interference, it multiplies the received signal by the chipping code to retrieve the original message. This causes the original signal to appear as a strong narrowband signal; the interference gets spread as a low-power wideband signal and appears as noise, and thus can be ignored.

In essence, the same spreading that drops a DSSS signal below the noise floor is what allows a DSSS radio to ignore narrowband interference when demodulating a signal. Therefore, DSSS is known to tolerate interference very well, but it is prone to fail when the interference is at a higher total transmission power, and the demodulation effect does not drop the interfering signal below the power level of the original signal.

Given that FHSS operates over 83.5MHz of the spectrum in the 2.4GHz band, producing high-power signals at particular frequencies (equivalent to many short, synchronized bursts of narrowband signal), it will avoid interference as long as it is not on the same frequency as the narrowband interferer. Narrowband interference will, at most, block a few hops, which the system can compensate for by moving the message to a different frequency. Also, FCC rules require a minimum separation of frequency in consecutive hops, so the chance of a narrowband signal interfering in consecutive hops is minimized.

When it comes to wideband interference, DSSS is not so robust. Since DSSS spreads its signal out over 22MHz of the spectrum all at once at a much lower power, if that 22MHz of the spectrum is blocked by noise or a higher power signal, it can block 100% of the DSSS transmission, although it will only block 25% of the FHSS transmission. In this scenario, FHSS will lose some efficiency, but not be a total loss.
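The 25% figure is roughly the proportion of the hop band that the jammer covers: if hops are spread evenly, a 22MHz jammer inside an 83.5MHz hop band lands on about a quarter of the hops. A one-line check:

```python
def fhss_blocked_fraction(jammer_mhz, hop_band_mhz):
    """Fraction of evenly distributed hops that fall inside the
    jammed slice of the band."""
    return jammer_mhz / hop_band_mhz

# 22MHz of wideband noise inside the 83.5MHz 2.4GHz hop band
print(f"{fhss_blocked_fraction(22, 83.5):.0%}")   # -> 26%
```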

In licensed radios the bandwidth is narrow, so a slight interference in the range can completely jam transmission. In this case, highly directional antennas and band pass filters may be used to allow for uninterrupted communication, or legal action may be pursued against the interferer.

802.11 radios are more prone to interference since there are so many readily available devices in this band. Ever notice how your microwave interferes with your cordless phone at home? They both operate in the 2.4GHz range, the same as the rest of 802.11 devices. Security becomes a greater concern with these radios.

If the intended receiver of a transmitter is located closer to other transmitters and farther from its own partner, it is known as a Near/Far problem. The nearby transmitters can potentially drown the receiver in foreign signals with high power levels. Most DSSS systems would fail completely in this scenario. The same scenario in a FHSS system would cause some hops to be blocked but would maintain the integrity of the system. In a licensed radio system, it would depend on the frequency of the foreign signals. If they were on the same or close frequency, it would drown the intended signal, but there would be recourse for action against the offender unless they have a license as well.

Distance
Distance is closely related to link connectivity: the strength of an RF link between a transmitter and a receiver, and the distance at which they can maintain a robust link. Given that the power level is the same and the modulation technique is the same, a 900MHz radio will have higher link connectivity than a 2.4GHz radio. As the frequency in the RF spectrum increases, the transmission distance decreases if all other factors remain the same. The ability to penetrate walls and objects also decreases as the frequency increases. Higher frequencies in the spectrum tend to display reflective properties. For example, a 2.4GHz RF wave can bounce off reflective walls of buildings and tunnels. Based on the application, this can be used as an advantage to carry the signal farther, or it may be a disadvantage causing multipath, or no path, because the signal is bouncing back.
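The frequency/distance trade-off can be quantified with the standard free-space path loss formula, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. The sketch below compares a 1.5-mile (about 2.4 km) line-of-sight link, like the weather-station example earlier, at two representative carrier frequencies (915MHz and 2450MHz are illustrative mid-band choices, not figures from the article):

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB for a line-of-sight link."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Same 2.4 km path at a 900MHz-band and a 2.4GHz-band carrier
for f_mhz in (915, 2450):
    print(f"{f_mhz} MHz: {fspl_db(2.4, f_mhz):.1f} dB")

# The 2.4GHz path loses 20*log10(2450/915), about 8.6 dB more --
# all else equal, that deficit translates directly into shorter range.
```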

FCC limits the output power on spread spectrum radios. DSSS consistently transmits at a low power, as discussed above, and stays within the FCC regulation by doing so. This limits the distance of transmission for DSSS radios, and thus may be a limitation for many industrial applications. FHSS radios, on the other hand, transmit at high power on particular frequencies within the hopping sequence, but the average power across the spectrum is low, and therefore meets the regulations. Since the actual signal is transmitted at a much higher power than with DSSS, it can travel farther. Most FHSS radios are capable of transmitting over 15 miles, and longer distances with higher-gain antennas.

802.11 radios, although available in both DSSS and FHSS versions, have a high bandwidth and data rate, up to 54Mbps (at the time of this publication). But it is important to note that this throughput holds only for very short distances, and degrades very quickly as the distance between the radio modems increases. For example, a distance of 300 feet would drop the 54Mbps rate down to 2Mbps. This makes these radios ideal for a small office or home application, but not for many industrial applications where there is a need to transmit data over several miles.

Since narrowband radios tend to be a lower frequency, they are a good choice in applications where FHSS radios cannot provide adequate distance. A proper application for narrow band licensed radios is when there is a need to use a lower frequency to either travel over a greater distance, or be able to follow the curvature of the earth more closely and provide link connectivity in areas where line of sight is hard to achieve.

Security
Since DSSS signals run at such low power, the signals are difficult to detect by intruders. One strong feature of DSSS is its ability to decrease the energy in the signal by spreading the energy of the original narrowband signal over a larger bandwidth, thereby decreasing the power spectral density. In essence, this can bring the signal level below the noise floor, thereby making the signal “invisible” to would-be intruders. On the same note, however, if the chipping code is known or is very short, then it is much easier to detect the DSSS transmission and retrieve the signal since it has a limited number of carrier frequencies. Many DSSS systems offer encryption as a security feature, although this increases the cost of the system and lowers the performance, because of the processing power and transmission overhead for encoding the message.

For an intruder to successfully tune into a FHSS system, he needs to know the frequencies used, the hopping sequence, the dwell time and any included encryption. Given that for the 2.4GHz band the maximum dwell time is 400ms over 75 channels, it is almost impossible to detect and follow a FHSS signal if the receiver is not configured with the same hopping sequence, etc. In addition, most FHSS systems today come with high security features such as dynamic key encryption and CRC error bit checking.

Today, Wireless Local Area Networks (WLAN) are becoming increasingly popular. Many of these networks use the 802.11 standard, an open protocol developed by IEEE. Wi-Fi is a standard logo used by the Wireless Ethernet Compatibility Alliance (WECA) to certify 802.11 products. Although industrial FHSS radios tend not to be Wi-Fi, and therefore are not compatible with these WLANs, there is still a good chance of interference because they operate in the same bandwidth. Since most Wi-Fi products operate in the 2.4 or 5GHz bands, it may be a good idea to stick with a 900MHz radio in industrial applications, if the governing body allows this range (Europe allows only 2.4GHz, not 900MHz). This will also provide an added security measure against RF sniffers (a tool used by hackers) in the more popular 2.4GHz band.

Security is one of the top issues discussed in the wireless technology sector. Recent articles about “drive-by hackers” have left present and potential consumers of wireless technology wary of possible infiltrations. Consumers must understand that 802.11 standards are open standards and can be easier to hack than many of the industrial proprietary radio systems.

The confusion about security stems from a lack of understanding of the different types of wireless technology. Today, Wi-Fi (802.11a, b and g) seems to be the technology of choice for many applications in the IT world, homes and small offices. 802.11 is an open standard to which many vendors, customers and hackers have access. While many of these systems have the ability to use encryption such as AES and WEP, many users forget or neglect to enable these safeguards, which would make their systems more secure. Moreover, features like MAC filtering can also be used to prevent unauthorized access by intruders on the network. Nonetheless, many industrial end users are very wary about sending industrial control information over standards that are totally “open.”

So, how do users of wireless technology protect themselves from infiltrators? One almost certain way is to use non-802.11 devices that employ proprietary protocols that protect networks from intruders. Frequency hopping spread spectrum radios have an inherent security feature built into them. First, only the radios on the network that are programmed with the “hop pattern” algorithm can see the data. Second, the proprietary, non-standard encryption method of the closed radio system will further prevent any intruder from being able to decipher that data.

The idea that a licensed frequency network is more secure may be misleading. As long as the frequency is known, anyone can dial into the frequency, and as long as they can hack into the password and encryption, they are in. The added security benefits that were available in spread spectrum are gone since licensed frequencies operate in narrowband. Frequency hopping spread spectrum is by far the safest, most secure form of wireless technology available today.

Mesh radio networks
Mesh radio is based on the concept of every radio in a network having peer-to-peer capability. Mesh networking is becoming popular since its communication path can be quite dynamic. Like the World Wide Web, mesh nodes make and monitor multiple paths to the same destination to ensure that there is always a backup communication path for the data packets.

There are many concerns that developers of mesh technology are still trying to address, such as latency and throughput. The concept of mesh is not new. The internet and phone service are excellent mesh networks based in a wired world. Each node can initiate communication with another node and exchange information.

Summary
In conclusion, the choice of radio technology should be based on the needs of the application. For most industrial process control applications, proprietary-protocol, license-free frequency hopping spread spectrum radios (Fig. 5) are the best choice because of lower cost and higher security capabilities in comparison to licensed radios. When distances are too great for a strong link between FHSS radios with repeaters, then licensed narrowband radios should be considered for better link connectivity. The cost of licensing may offset the cost of installing extra repeaters in an FHSS system.

As more industrial applications require greater throughput, networks employing DSSS that enable TCP/IP and other open Ethernet packets to pass at higher data rates will be implemented. This is a very good solution where PLCs (Programmable Logic Controllers), DCS (Distributed Control Systems) and PCS (Process Control Systems) need to share large amounts of data with one another or with upper-level systems like MES (Manufacturing Execution Systems) and ERP (Enterprise Resource Planning) systems.

When considering a wireless installation, check with a company offering site surveys that allow you to install radios at remote locations to test connectivity and throughput capability. Often this is the only way to ensure that the proposed network architecture will satisfy your application requirements. These demo radios also let you look at the noise floor of the plant area, signal strength, packet success rate and the ability to identify if there are any segments of the license free bandwidth that are currently too crowded for effective communication throughput. If this is the case, then hop patterns can be programmed that jump around that noisy area instead of through it. MT
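The survey readings described above feed a simple link-budget check: subtract the measured noise floor from the measured signal strength, then subtract the signal-to-noise ratio the radio needs; what remains is the fade margin. The readings and the 10 dB SNR requirement below are hypothetical:

```python
def link_margin_db(rssi_dbm, noise_floor_dbm, required_snr_db=10.0):
    """Fade margin left after the receiver gets the SNR it needs.
    A healthy positive margin lets the link ride out fading and
    day-to-day changes in the plant's noise floor."""
    return (rssi_dbm - noise_floor_dbm) - required_snr_db

# Hypothetical demo-radio survey readings
rssi = -72.0            # measured signal strength, dBm
noise = -95.0           # measured plant noise floor, dBm
print(f"{link_margin_db(rssi, noise):.0f} dB margin")   # -> 13 dB margin
```

A commonly cited rule of thumb is to look for a margin of roughly 10-20 dB before committing to a permanent installation.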


Gary Mathur is an applications engineer with Moore Industries-International, in North Hills, CA. He holds Bachelor’s and Master’s degrees in Electronics Engineering from Agra University, and worked for 12 years with Emerson Process Management before joining Moore. For more information on the products referenced in this article, telephone: (818) 894-7111; e-mail: GMathur@miinet.com


6:00 am
January 1, 2007

Problem Solvers: Conveyor Idler Bearing Isolator Increases Reliability And Safety While Reducing Downtime And Power Consumption

Inpro/Seal Company has announced its new Belt Conveyor Idler/Roller Bearing Isolator. The result of direct customer requests, input and feedback, continuous R&D and extensive field testing and trials, this product was designed to increase productivity, save energy and increase safety in coal mining, ore mining, aggregate and related operations that use belt conveyors in bulk material handling applications. Before the advent of the Belt Conveyor Idler/Roller Bearing Isolator, users had to deal with outdated sealing methods, in particular elastomeric seals, to protect idler bearings. These small, spring-loaded contact seals are tiny plastic devices that make contact and rub on the exterior of the idler roll while it operates. Elastomeric seals are widely used because they are cheap and because there has not been anything better available–until now. Since a contact seal is prone to failure and needs constant maintenance, the entire bearing protection system is somewhat precarious. And when an elastomeric seal quits working, undesirable things happen, much of it without warning.

A huge industry problem
Belt conveyors are in service around the world, working 24/7 to “trough the belt,” or transport bulk materials, in coal mining, ore mining, aggregate, hard quarry and related applications, including concrete, asphalt, fertilizer, salt, recycling, wood, pulp and paper, electric utility, grain, construction, agriculture, steel and general industry. These belts are typically supported by three conveyor rollers, or idlers, positioned at intervals as close as three linear feet. One roller is horizontal and the other two are positioned on either side, at the angle necessary to carry the burden. Depending on the specific application, they operate above and under ground and may extend for many miles over mountainous terrain, roads and streams. There may be as many as 10,500 bearings and bearing protection devices on the conveyor rollers per mile of run. In the mining industry, it’s estimated that each site has 3-4 miles of conveyor with idlers strung out the entire length of the belt.

When an idler fails, it is most likely the result of bearing damage caused by contaminants (dust or moisture) entering the bearing environment. Chances are the plastic has failed by wearing out and has grooved the shaft or has burned to a crisp at the point of contact. Once an elastomeric seal fails, contaminants are drawn into the housing, where they condense and contaminate the lubricant and cause the bearings to fail. The end result is a seized roll, belt damage or worse. The idler can burst open, and if it does, metal-on-metal contact can cause a fire. To counter this, most mining operations employ greasers that work around the clock trying to keep idler bearings lubricated in an effort to make contact seals work. But, because lip seals carry a 100% failure rate, eventually users will have to deal with catastrophic belt failure no matter what they try.

A welcome solution
Inpro’s Belt Conveyor Idler Roller Bearing Isolator is custom-engineered to suit individual applications. It is easy to install because it conforms to existing clearances, housings and bearing patterns. It can be retrofitted to any existing manufacturer’s top-side and return frame assemblies in any belt width or troughing angle for any brand of conveyor. It is available in any idler configuration, including CEMA B, C and D, and will fit any idler type, including: transition, impact, troughed, training, return belt, flat carrier, rubber cushion return, self aligning, self aligning return, offset center roll, picking and feeding, unequal length troughed, wire rope, wire rope return, low profile, “V” return idler, variable trough, rubber disc, ceramic, two-, three- and five-roll garland, live shaft and side guide conveyor idlers.

Inpro/Seal Company
Rock Island, IL

Split Shaft Seals Curb V.O.C. Emissions

Original MECO® custom shaft seals, made by Woodex Bearing, have proven effective in containing V.O.C. (volatile organic compound) vapors from rotating reactor, dryer, extractor and conveyor driveshafts, resulting in local solvent concentrations of 100 PPM and less. MECO’s patented seal designs are custom-engineered to accommodate diametric shaft run-out of 6mm and more and still hold vacuum. Some models can operate at high temperature without a purge or flush line. Fully split models can be installed on existing machinery with minimal downtime. Seal performance is reliable, with long run-time between rebuilds, even in applications with bent or misaligned shafts. Seal maintenance can be predicted far in advance. These seals are used on rotating equipment in the dry powder and bulk processing industries. FDA-approved materials are available.

Woodex Bearing Co., Inc.
Georgetown, ME

 

Repairs, Rebuilds, Upgrades

The Stock Perpetual Motion after-sales customer care program can keep you running during critical load periods. New in the U.S., it comes standard with the purchase of a Stock Bulk Material Handling (BMH) product, offering 24/7 support for companies wishing to improve existing plant performance without complete equipment renewal. It starts with a technical assessment of your bulk handling equipment and a subsequent status report. From this evaluation, Stock can suggest upgrade or rebuild recommendations that improve performance and enhance operation without the high cost or lost time associated with complete replacements. Stock can perform the recommended service and provide guarantees on reliability and performance. Perpetual Motion can be built into a tailored contract package, allowing customers to upgrade a plant at scheduled intervals and as part of an ongoing program of services. These services typically include routine maintenance, lubrication and call out.

Stock

Schenck Process Group
Cleves, OH

 

Meet All Sanitary Regs With This Easily Serviced/Cleaned Powder & Bulk Conveyor

Hapman has added the Series 600 (6” diameter) Helix™ flexible screw conveyor to its versatile Hi/Lo tilting base conveyor line. This original Hapman design has the added option of a ribbon-style agitator to assure proper size reduction of material for consistent conveying. As with other Hapman Hi/Lo units, it can be quickly moved from location to location and is easily serviced/cleaned in its lowered position. Finishes can be standard industrial, food grade, or 3A Dairy. Hapman is currently the only manufacturer of flexible screw conveyors able to provide this USDA Equipment Acceptance Certificate.

Hapman
Kalamazoo, MI

 

“No-More-Lube” Chain Technology

According to its manufacturer, the Renold Syno line sets a new benchmark for chain performance with little or no lubrication. Covering both small and large pitch sizes, this technology has been tailored into three different products that carry the Syno name. They include:

  • Nickel-Plated for hygiene-sensitive applications where lubricant contamination must be avoided
  • Stainless Steel as an option when the application requires enhanced levels of corrosion resistance
  • Polymer Bush to tackle serious wear and fatigue associated with higher-load, heavier-duty jobs

Renold Jeffrey
Morristown, TN

 

“Like-New” Separators With Genuine Parts And Rebuild Services

Operators of ROTEX® separation equipment can enjoy peace of mind in knowing that maintenance of this equipment is fully supported by the ROTEX Parts & Service group. Whether customers need a single replacement part or an entire refurbishing and upgrade of any ROTEX machine to like-new condition, the Parts & Service group has the expertise and the inventory to keep the company’s customers covered. According to a ROTEX spokesman, the company will completely disassemble the screener, replace worn parts and install the latest technology so that when it sends a piece of equipment back to the customer, it’s the same as a new machine. That’s quite a cost-efficient alternative to purchasing new equipment. The ROTEX line of innovative separation equipment includes Gyratory and Vibratory Screeners and Sifters for Dry Applications, Liquid-Solid Separators for Wet Applications, Automated Particle Size Analyzers and Vibratory Feeders and Conveyors.

 


6:00 am
January 1, 2007

Beyond Milestones

With this January 2007 issue, MAINTENANCE TECHNOLOGY reaches an important milestone: completion of two decades of publication as the premier magazine for the plant equipment reliability, maintenance and asset management community. One of the key reasons for our success during these 20 years has been industry’s growing recognition that it is the maintenance function and YOU, the hardworking professionals involved in it, that keep plants and facilities across all market segments up and running.

Unlike other publications in the industry, MAINTENANCE TECHNOLOGY is dedicated to serving that large, but very select, audience of managers and supervisors who are responsible for ensuring the reliability and availability of their organizations’ systems 24/7. We are the leader in this very important market because we are the ONLY publication focused 100% on this industry and its never-ending quest for world-class maintenance status.

January, as the cliché goes, is a time of reflection and of renewed commitments for improvement. Around here, though, we don’t stop with January. Throughout the coming year, we will continue to reflect on what we have been doing for you over the past 20 years, and, more importantly, how we can be serving you better in the future.

MAINTENANCE TECHNOLOGY is your publication. From the beginning, it was designed to help you successfully address the many challenges that complex industrial environments throw at you on a daily basis. And, just as you are held responsible for the efficient running of your plants and facilities, we expect you to hold us responsible for delivering the type of information that helps you to do your jobs better…faster…more cost-effectively…

Year after year, MAINTENANCE TECHNOLOGY has strived to be far more than just another trade journal that lands on your desk each month. We have sought to be valuable partners with you and your company, helping you to wade through and understand the countless technologies and strategies, both available and emerging, that can help make your job easier and your operations more reliable and profitable. Our mission hasn’t changed over the years…but, we like to think that we’ve grown far stronger in our pursuit of it.

Of course, we couldn’t be where we are today without YOU, our loyal readers and advertising partners. Thank you so much for your past support. We look forward to working with you over the next 20 years!

Happy New Year! MT


The Most Productive Nation


Bob Williamson, Contributing Editor

What should we wish for in 2007? Cutting operating costs has been at the top of the business and industry wishlist for over 30 years…

Sometimes the cost-cutting bell gets rung louder than others. It all depends, some say, on Wall Street investors, stockholders, executive decisions, the marketplace, competition, return on investment, global economic changes and/or currency exchange rates. Then, in prosperous times, the cost-cutting bell is silenced. Should we wish for more of the same?

The United States remains the most productive nation in the world, and U.S. manufacturing has remained the most productive in the world since before 1960! Despite what the media says, despite politicians’ interpretations, despite what some may think, we are a model of economic stamina, whether measured by Real GDP (Gross Domestic Product) per capita or Real GDP per employed person. The top 10 nations in Real GDP per capita in 2005 were: U.S., Norway, Denmark, Netherlands, Canada, Austria, U.K., Belgium, Sweden, Australia. Manufacturing, not service industries, is one of the sources of “original wealth” (along with mining and agriculture). Should we wish to remain the most productive nation in the world? If so, we have serious work to do…and we already know how to do it!

Good news continues to be reflected in this year’s productivity trends: U.S. manufacturing Unit Labor Costs (ULC) fell 8.3% in the second quarter and 4.1% in the third quarter of 2006 (ULC = average labor compensation per unit of output). Productivity improvement measures, including advanced manufacturing methods, workplace innovation, favorable currency exchange rates, and (I believe) our maintenance and reliability improvements continue to sustain America’s competitive edge.

Low-wage countries continue to attract the attention of some manufacturers. However, these countries (China, India, Mexico, Turkey, Czech Republic, Hungary and Poland) also have extremely low productivity levels. This is where the Unit Labor Cost comes in: a true measure of economic productivity. For example, wages are considerably lower in China and India (only 2% to 3% of U.S. wages). But productivity is also significantly lower in China and India (12% to 13% of U.S. productivity). That means considerably MORE labor hours are required to make the same output in China and India than in the U.S. Still, China’s and India’s Unit Labor Costs are lower than those of the U.S., but only 20% lower, on average. And, 20% isn’t that much when you calculate the true “costs” of importing goods from Asia. These include actual transportation, in-transit damage, un-returnable defective products, long lead times for changes and order quantities and high inventory levels that have to be maintained here, not to mention the risk of dealing with a country (China) that doesn’t recognize proprietary information, patents, trademarks or copyright protections.
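The unit-labor-cost arithmetic behind this comparison can be sketched in a few lines. The plant figures below are purely illustrative assumptions, not the Conference Board data the column cites; the point is only the relationship ULC = compensation per hour ÷ output per hour.

```python
# Unit Labor Cost (ULC) = labor compensation per unit of output.
# A low wage does not guarantee a low ULC: if productivity is low,
# more labor hours go into each unit, eroding the wage advantage.
# All numbers below are illustrative assumptions, not cited data.

def unit_labor_cost(hourly_compensation, units_per_hour):
    """Labor cost embedded in one unit of output."""
    return hourly_compensation / units_per_hour

# Hypothetical domestic plant: higher wages, higher productivity.
ulc_domestic = unit_labor_cost(hourly_compensation=25.0, units_per_hour=10.0)

# Hypothetical low-wage plant: 1/10th the wage, but 1/8th the output rate.
ulc_offshore = unit_labor_cost(hourly_compensation=2.5, units_per_hour=1.25)

print(f"Domestic ULC: ${ulc_domestic:.2f}/unit")   # $2.50/unit
print(f"Offshore ULC: ${ulc_offshore:.2f}/unit")   # $2.00/unit
print(f"Offshore advantage: {1 - ulc_offshore / ulc_domestic:.0%}")  # 20%
```

With these assumed figures, the offshore ULC comes out only 20% lower, echoing the column’s point that a large wage gap can shrink to a modest unit-cost gap once productivity is factored in.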

China and India, among others, will continue to be formidable consumers and competitors in the global market. Twenty-eight percent (28%) of all of the world’s jobs are in China and 15% are in India. As their standards of living increase, so will their cost of living and their employee compensation. In China, for example, average hourly compensation in manufacturing jobs rose 8.8% from 2002 to 2003, and another 8.1% from 2003 to 2004. To retain their lower ULC, China and India must employ increasingly more advanced manufacturing technologies, methods and innovations along with their economic and environmental reform policies. Advanced manufacturing requires increasing levels of skilled and highly-skilled workers and technicians, which also brings higher compensation levels. As noted in previous columns and articles, developing and attracting higher-skilled workers will continue to be an escalating worldwide problem.

Our challenge for 2007 and beyond is to keep our productivity levels high and our operating costs down as we enter a 19-year era of drastic workforce demographic changes. We must dramatically improve the education levels of our workforce to facilitate error-free operations plus accelerate our ability to rapidly innovate and improve our infrastructure, facilities, manufacturing, transportation and utilities. Our business and government leaders, schools and families all play a role in retaining and improving our competitive advantage. Look what’s happened over the past 30 years: Vocational/technical school programs have declined, as have skilled trades apprenticeship programs. Many manufacturing and maintenance jobs have lost their luster, despite relatively high wages. Changes in taxes, insurance, health care, permits and liability litigation have increased costs. The cost of procuring and transporting raw materials and finished goods has skyrocketed. Outsourcing and off-shoring, once thought to be “the answers” to our industrial woes, may not always be the best path to a long-term, viable economy. These strategies often just turn out to be “quick fixes” with long-term consequences.

My wish for 2007? Let’s all do our part to improve our Nation’s success by building a solid foundation based on an educated, motivated, innovative workforce. Let’s make our critical equipment, infrastructure and facilities the most reliable and best-maintained and our standard of living and productivity the highest in the world. Here’s wishing all of our faithful readers a very happy and prosperous New Year!

bwilliamson@atpnetwork.com

AUTHOR’S NOTE: The facts and statistics for this article were obtained from The Conference Board Report (October 2006); The Conference Board via Newswire (June 01, 2004); USDOL, Bureau of Labor Statistics News (Nov. 30, 2006 & Dec. 5, 2006); and the USDOL, BLS, Office of Productivity & Technology report: “Comparative Real GDP Per Capita and Per Person Fifteen Countries 1960-2005.”


Asset Intelligence Goes Beyond Basic Condition Monitoring

With new and increasingly more powerful on-line equipment diagnostic tools becoming available every year, process manufacturing industries now have the opportunity to integrate this critical equipment condition information into their asset management strategies. These strategies can support more business-driven approaches aimed at improving overall financial performance. Much work still needs to be done, however.

Until now (in process manufacturing operations at least…), the focus has been on relatively limited and specific diagnostic monitoring of intelligent field devices and large rotating equipment. This is due largely to the widespread availability of highly capable, fieldbus-enabled condition monitoring tools, such as vibration, temperature and pressure monitoring and fluid analysis, all of which can be integrated into the control system strategy to react to critical changes in the readings.

But, within an overall asset management strategy, it’s important that real-time condition monitoring practices go beyond intelligent field devices and large rotating equipment to encompass all plant production assets. These should include all sensors and actuators (regardless of the vendor); rotating and non-rotating equipment, such as pumps, motors, compressors, turbines, mixers, dryers and heat exchangers; even entire process units.

The real goal is to move to predictive and proactive decision-making based on developing trends versus our current reactionary approach. This means that large (and often overwhelming) amounts of real-time diagnostic data now available must be collected, aggregated and analyzed, then put into proper context and made available to other plant and enterprise systems. In addition, we need to manage and control the resulting actions to manage risk and support our continuous improvement efforts, bringing together Maintenance, Operations and Engineering. By pulling these three aspects together—collection, analysis and action—we move from condition monitoring to “condition management” based on real-time asset intelligence.
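The shift from reacting to individual readings toward acting on developing trends can be illustrated with a minimal sketch. The fit-a-slope-and-extrapolate approach, the sample data and the alarm limit below are all illustrative assumptions, not a description of any particular vendor’s analytics.

```python
# Minimal trend-based condition check: instead of alarming only when a
# reading crosses a limit (reactive), fit a linear trend to recent
# samples and estimate when the limit WILL be crossed (predictive).
# Thresholds and data are illustrative assumptions.

def hours_to_limit(samples, limit):
    """Least-squares slope over (hour, value) samples; returns estimated
    hours from the latest sample until `limit` is reached, or None if the
    trend is flat or improving."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den
    if slope <= 0:
        return None  # not degrading
    latest_t, latest_v = samples[-1]
    return (limit - latest_v) / slope

# Bearing vibration (mm/s RMS) logged every 24 h; assumed alarm limit 7.1 mm/s.
history = [(0, 2.0), (24, 2.4), (48, 2.9), (72, 3.3), (96, 3.8)]
eta = hours_to_limit(history, limit=7.1)
print(f"Estimated hours until alarm limit: {eta:.0f}")
```

A trend like this turns an eventual alarm into a plannable maintenance event, which is the essence of moving from condition monitoring to condition management.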

The key lies in developing a knowledge management capability that captures the expertise of today’s highly experienced operators, engineers and maintenance technicians. While this capability is important today, it will become even more critical in the future as our industrial plants struggle to maintain current levels of asset utilization and availability with an ever-shrinking pool of skilled and knowledgeable personnel due to an aging workforce and retirement of many of our most experienced people.

By combining this knowledge with an integrated view of the entire operation from both the business and operations perspectives, we can move to an environment where more informed decisions can be made in a more timely fashion. From this base, we will be well-positioned to manage the risks inherent in the process industries (i.e., health and safety, regulatory, financial and environmental) while delivering improved business performance and shareholder value. MT


The Maintenance/Production Partnership: Part II


Ken Bannister, Contributing Editor

Role definition is crucial if both the Maintenance and Production departments are to strike an accord and work in an autonomous, yet cohesive, manner to deliver a high-quality product in a waste-free, cost-effective way. Virtually every major management philosophy and methodology in practice today recognizes and fosters the integral relationship between the Maintenance and Production departments. Zero-inventory-based Just In Time (JIT) and Lean manufacturing methods would not be possible without high levels of equipment reliability and availability, driven by active operator involvement in the maintenance process.

Autonomous operator-based maintenance is foundational to the Total Productive Maintenance (TPM) philosophy, and is a cornerstone of the Reliability Centered Maintenance (RCM) methodology, both of which heavily utilize operator input to design, implement and continuously improve equipment maintenance reliability strategies. Increasing reliability and throughput requires Maintenance and Production to work together on a two-pronged management and hourly workforce level.

Operator-based maintenance
Operator-based maintenance can be implemented through the following three-step approach designed to promote confidence in both parties:

Step 1: Commence with a revised work acceptance procedure. Whenever Production calls in a machine problem, guide the caller(s) to disclose their name, the machine #/description, location, area of the problem (component or system) and a primary sense STILL (Smell, Touch, Intuition, Look, Listen) analysis of what the problem is believed to be. Operators instinctively know when their equipment is not running in the “sweet spot,” but they are rarely asked for their opinion(s). This step simplifies and speeds up the pre-planning process and allows the scheduler to more accurately dispatch the correct resources the first time.

Step 2: Allow and encourage operators to be part of the testing, start-up and acceptance after repair completion.

Step 3: Introduce Reliability Centered Maintenance (RCM). Choose a suitable RCM pilot and always include the relevant equipment operator and supervisor as part of the RCM analysis team when performing the FMEA analysis and condition-based maintenance work tasks. Use a perimeter-based maintenance approach in which the equipment is set up for rudimentary preventive and condition monitoring checks while running. These checks can include temperature, flow, throughput, fill level, pressure and filter cleanliness, set up in an interactive “Go/No Go” style that lends itself perfectly to a regular operator check. This type of “Go/No Go” check only requires paperwork in the form of a work request when a “No Go” state is in effect.

Take, for example, a pre-RCM PM work order that might have instructed a maintainer to check and record all gauge pressures. This is not just a waste of maintenance resources; the maintainer would also have to know the upper and lower safe operating window (SOW) limits for every gauge in order to avert a developing situation immediately.

Recording every good pressure in the CMMS history also is meaningless and a waste of resources when it comes to input of the data. Marking each gauge with the SOW allows anyone viewing the instrument to tell if the needle is in the safe, or “Go,” position between the lines, in which case no further action is required or taken. If, however, the needle is outside the SOW mark lines, in a “No Go” state, the operator contacts the supervisor, who immediately raises a work request for Maintenance to attend to the pending situation. Because of the RCM FMEA analysis, Maintenance knows right away what the problem root cause could be and activates a planned work order in response to the event condition.
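The “Go/No Go” gauge logic described above amounts to a simple window check that generates paperwork only on exception. A minimal sketch, in which the machine tag, SOW limits and readings are assumed example values:

```python
# "Go/No Go" operator check against a Safe Operating Window (SOW).
# A reading inside the window needs no action or record-keeping; only a
# "No Go" reading generates a work request. All values are examples.

def check_gauge(tag, reading, sow_low, sow_high):
    """Return None for a "Go" reading, or a work-request dict for "No Go"."""
    if sow_low <= reading <= sow_high:
        return None  # "Go": needle between the SOW marks; no paperwork
    return {
        "machine": tag,
        "reading": reading,
        "sow": (sow_low, sow_high),
        "action": "raise work request for Maintenance",
    }

# Hypothetical discharge pressure gauge with an assumed SOW of 40-60 psi.
assert check_gauge("PUMP-12 discharge", 52.0, 40.0, 60.0) is None
request = check_gauge("PUMP-12 discharge", 66.5, 40.0, 60.0)
print(request["action"])  # only the "No Go" path creates paperwork
```

The design point is the asymmetry: in-window readings cost the operator a glance, and only exceptions consume planning and CMMS resources.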

RCM, combined with the autonomous maintenance work by operators that Total Productive Maintenance (TPM) advocates, is a perfect catalyst in building and cementing autonomous operator maintenance as a first-level maintenance approach, wherein the operator becomes the true machine guardian on a daily basis. Once a comfortable maintainer/operator working relationship is established, more complex PM-styled tasks, such as lubrication and filter changeouts, can be engineered into the operator-based maintenance program. In Fig. 1, operator-based maintenance is shown dovetailing into the core element of the maintenance process.

[Fig. 1. Operator-based maintenance dovetails into the core element of the maintenance process.]

Maintenance/production management alignment
Aligning the Maintenance and Production management teams to work in partnership is achieved through communication and an understanding of each other’s goals and objectives. In the process, the parties work collaboratively in the planning and scheduling of the production equipment uptime and downtime activities.

As both departments own the equipment in different ways, both compete for “alone” time with the equipment. Unfortunately, if both agendas are not harmonized, the equipment will suffer and both departments will lose.

The interactive input/output information required of both departments in order to prepare and schedule weekly forecasts and daily work schedules effectively is depicted in Fig. 2. In both cases, monthly and weekly schedule forecasts are being built on an ongoing basis, and being used as “best guesstimates” for assessing and managing resource requirements. From these forecasts come the daily schedules that are usually 70% to 95% accurate, and which should be just flexible enough to allow for minor unforeseen changes. To synchronize these daily schedules, both Maintenance and Production must agree, through the RCM process, what point in an asset’s condition dictates an uncontested responsive event in which both the Maintenance and Production planning and scheduling departments will work together in the asset’s interest alone.

[Fig. 2. Interactive input/output information both departments require to prepare weekly forecasts and daily work schedules.]

The Maintenance department can further assist the Production staff by providing a series of documents that include:

  • A daily equipment condition report spelling out any triggered alarm conditions and found “No Go” exceptions that require planning and scheduling
  • A status report of unfinished or “carryover” work from a previous day or shift
  • A report-driven form with the fault codes marked on the work orders to show the percentage of non-maintenance-caused equipment failures (i.e., operator error, loading errors or jamming, overloading, etc.)
  • An equipment availability report

The Production department can further assist the Maintenance staff by providing a report detailing any pending product changeover or retooling event, from which Maintenance can take the forced downtime opportunity to plan and schedule backlog or pending work on that equipment. Production will also assist Maintenance by providing reports on raw material problems, equipment incidents and any work requests. Getting together on a daily basis allows the information transfer and the setting of an almost fixed daily schedule. The product of this is equipment reliability and availability that translates directly into sustainable throughput and quality!

Ken Bannister is lead partner & principal consultant for Engtech Industries, Inc. Telephone: (519) 469-9173; e-mail: kbannister@engtechindustries.com

 


Reducing Hot-Spot Temperatures in Transformers

In this real-world study from the power generation sector, researchers tested external oil coolers and ultra pure mineral oil to determine their effectiveness in reducing hot-spot temperatures and, ultimately, improving equipment reliability

Over the past several years, Consumers Energy (“Consumers”) has come to rely strongly on external oil coolers to delay scheduled transformer capacity increases, or to cool transformers that experience marginally high top-oil temperatures. A transformer experiencing a top-oil temperature of 90 to 100 C or more would be a likely candidate for such an installation. These types of external coolers are installed in close proximity to the transformer using flexible hoses that are typically connected to existing 1-1/2 in. taps near the top and bottom of the transformer.

Now that Consumers has acquired more than 20 oil coolers, questions frequently are being asked regarding the effectiveness of these units in actually limiting the loss of insulation life. Although the cooler reduces the oil temperature, there is a concern that it may be disrupting the natural convective oil flow inside the transformer and the hot-spot cooling effect may not be as great as expected or indicated by the top oil temperature.

Under normal conditions, the temperature gradient between the top and bottom of a transformer produces an internal oil circulation that acts to remove heat from the coils through convection. An external cooler can diminish this normal temperature gradient, resulting in reduced convective currents that, in theory, can create pockets of stagnant oil and induce local overheating. To avoid this situation, some utilities have reportedly removed OEM-installed oil pumps from transformers where there has been no internally directed oil flow.

Equipment description
Study One…
The transformer selected for Study One was a unit being rewound for Consumers by Siemens Westinghouse of Hamilton, Ontario. This 5/6.25 MVA circular-core unit was originally manufactured by Allis Chalmers in 1952. Design changes by Siemens Westinghouse increased the OA rating to 6 MVA and the FA rating to 7.5 MVA. Six Luxtron fiber optic sensors were implanted near the top of the transformer’s secondary coils—two in each winding with one located between the first and second disk and one between the second and third disk. The sensors were installed as near to the mid-point of the disks as feasible and in contact with the copper conductor. These locations are thought to closely represent the transformer’s hot-spot location. All other temperatures recorded in this study were taken from standard thermocouples.

A 50 kW external oil cooler was obtained from Unifin of London, Ontario. This cooling unit consists of a 1 HP Cardinal pump, two 4.0 HP fans and a heat exchanger. The pump used by Unifin is designed for a variety of applications, with the desired oil flow for a given application achieved by throttling the flow with a valve on the discharge side of the pump. Nominally, this combination of components is rated by Unifin for a flow rate of 20 GPM, but the pump can produce a much higher flow, as was observed in this study.

Study Two…
The transformer selected for Study Two was a unit being rewound for Consumers by Ohio Transformer of Tallmadge, Ohio. This 5 MVA base circular-core transformer was originally manufactured by GE in 1963.

Six FISO fiber optic sensors, two per phase, were implanted in the coils of the transformer and a FISO Nortech-6 monitor was installed to record the readings. The hotspot locations were determined by the design team at Ohio Transformer, and the sensors were installed during the rewind process. All other temperatures recorded in this study were taken from standard thermocouples.

A 100 kW external oil cooler was obtained from SD Myers. This cooling unit consists of a 3 HP pump, 5.0 HP fans and a heat exchanger. The cooler is mounted on a portable trailer and includes hoses configured with check valves and quick connect fittings. The desired oil flow is achieved by throttling the flow with a valve on the discharge side of the pump. Nominally, this combination of components is rated by SD Myers for a flow rate of 50 GPM, with a capability of removing 340,000 BTU/hr.

An industry standard mineral oil and an ultra pure mineral oil manufactured by Petro-Canada under the trade name Luminol were obtained from Ohio Transformer. The transformer was first filled with standard mineral oil, tested, drained, refilled with Luminol, and then retested to obtain the efficiency comparison between the insulating oils used in combination with and without the external auxiliary oil cooler.

Study conditions and results


Study One…
Heat runs were initially conducted on the Allis Chalmers transformer (which had undergone design changes and was being rewound by Siemens Westinghouse) at the OA and FA ratings and then at 150% of the FA rating, or 11.25 MVA. While still at the 11.25 MVA level, the oil cooler was connected and temperatures were recorded until temperature stabilization was achieved. The cooler’s oil flow rate maintained for the initial run was 45 GPM. The observed temperature differential between the cooler’s inlet and outlet was consistently about 10 C degrees.
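The cooler’s observed performance can be cross-checked with a standard sensible-heat calculation. The flow rate and temperature differential below come from the study; the oil density (about 7.3 lb/gal) and specific heat (about 0.45 BTU/lb-F) are typical values for mineral insulating oil, assumed here rather than taken from the study:

```python
# Sensible heat removed by the external cooler: Q = m_dot * cp * dT.
# Flow (45 GPM) and the ~10 C (18 F) inlet/outlet differential are from
# the study; oil density and specific heat are assumed typical values.

OIL_DENSITY_LB_PER_GAL = 7.3   # assumed, typical mineral insulating oil
OIL_CP_BTU_PER_LB_F = 0.45     # assumed specific heat
BTU_PER_HR_PER_KW = 3412.14

def cooler_duty_kw(gpm, delta_t_f):
    """Heat removal (kW) for a given oil flow (GPM) and temperature drop (F)."""
    mass_flow_lb_hr = gpm * 60 * OIL_DENSITY_LB_PER_GAL
    q_btu_hr = mass_flow_lb_hr * OIL_CP_BTU_PER_LB_F * delta_t_f
    return q_btu_hr / BTU_PER_HR_PER_KW

duty = cooler_duty_kw(gpm=45, delta_t_f=18)  # 10 C differential = 18 F
print(f"Estimated cooler duty: {duty:.0f} kW")
```

Under these assumptions the estimate comes out near 47 kW, which lines up well with the unit’s 50 kW nameplate rating.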

One of the fiber optic sensors stopped working early in the first heat run. The instrument displaying the fiber optic temperatures is capable of displaying four readings at a time. The temperatures recorded were taken one each from the outside windings and two from the center phase winding.

The warmest hot-spot temperature recorded while loaded to 11.25 MVA, and without the cooler operational, was 112 C on the center phase winding. When temperature stabilization was reached after the cooler was operational, this temperature had been reduced to 100 C. The magnitude of this temperature reduction was fairly consistent across all the sensors.

At the end of the first heat run with the cooler connected, the pump flow rate was increased to its maximum (estimated to be about 60 to 65 GPM) for one hour. No appreciable change was noted in the hot-spot temperatures as a result of this, although there was a reduction of two degrees in the top-oil and average-oil rise temperatures. Had the test continued at this higher flow rate for a longer period, it is expected that the hot-spot temperature would have registered a similar decline.

The flow rate was then reduced to 20 GPM for a four-hour period. This resulted in an increase in the hot spot temperatures of approximately 4 C degrees.

Study Two…
Heat runs were conducted on the GE transformer (that was being rewound by Ohio Transformer) at the OA and FA ratings and then at 150% of the FA rating, or 10.5 MVA, initially with the transformer filled with standard industry mineral oil and then repeated after draining the oil and re-filling with Luminol. While at the 10.5 MVA level and after the temperature stabilized, the oil cooler was connected and temperatures were recorded until they stabilized again. The cooler’s oil flow rate maintained for this study was 24 GPM.

The average hot-spot temperature recorded while loaded to the FA rating of 7 MVA, and without the cooler operational, was 92 C, using standard oil, and 87 C, using Luminol, after stabilizing. When temperature stabilization was reached after the cooler was operational, this temperature was reduced to 83 C, using standard oil, and 80 C, using Luminol. The magnitude of this temperature reduction was fairly consistent across all the sensors. The observed temperature differential between the cooler’s inlet and outlet varied between 8 and 14 C degrees, using standard oil, and between 11 and 18 C degrees, using Luminol.

The load was increased to the 10.5 MVA level, the oil cooler was connected, and temperatures were recorded until temperature stabilization was achieved. At this point, it was observed that the average hot-spot temperature of 140 C, in both cases, had been reduced to 127 C, using standard oil, and 115 C, using Luminol. The magnitude of this temperature reduction was fairly consistent across all the sensors. The observed temperature differential between the cooler’s inlet and outlet varied between 12 and 15 C degrees, using standard oil, and between 21 and 28 C degrees, using Luminol. (See Tables I & II and Figs. 2, 3, 4, 5, 6, 7.)

[Tables I & II and Figs. 2-7 appear here.]

 

Conclusions
This study substantiates the benefit of employing an external oil cooler and the added benefit of using an ultra pure mineral oil (Luminol) in reducing a transformer’s hot-spot temperature, thus preserving the life of the unit’s paper insulation. The relatively large internal oil quantities and large heat-exchange surfaces of the transformers in this study result in relatively low internal oil and hot-spot temperatures.

Conversely, for a more modern unit with higher design temperatures, the expected temperature reduction with an external oil cooler could be even more impressive. However, the possibility of disrupted internal convection currents or diversion of oil from the transformers’ own radiators also would seem to be more likely because of the characteristically lower internal oil volumes. Consequently, a lower oil flow rate in the external cooler might be needed to avoid disrupting the transformer’s normal internal cooling pattern.

The transformer in Study One contained 1,920 gallons of oil, or 0.32 gallons per OA rated kVA, and the transformer in Study Two contained 1,300 gallons of oil, or 0.26 gallons per OA rated kVA. In a spot check of six transformers recently purchased by Consumers Energy, the lowest amount of oil found was 0.205 gallons per OA kVA rating. The SD Myers transformer maintenance guide reported in 1981 that some transformers had as little as 0.02 gallons per kVA.
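The gallons-per-kVA figures quoted above follow directly from the transformers’ oil volumes and OA ratings. A quick check, using only numbers taken from the study:

```python
# Oil inventory normalized by OA rating, as quoted in the article.
# Volumes and ratings come from the study; no assumptions added.

def gal_per_oa_kva(oil_gallons, oa_kva):
    return oil_gallons / oa_kva

study_one = gal_per_oa_kva(1920, 6000)   # 6 MVA OA rating after redesign
study_two = gal_per_oa_kva(1300, 5000)   # 5 MVA base rating
print(f"Study One: {study_one:.2f} gal/kVA")  # 0.32
print(f"Study Two: {study_two:.2f} gal/kVA")  # 0.26
```

The same normalization makes the article’s later comparison meaningful: a unit at 0.02 gal/kVA holds an order of magnitude less oil per unit of rating, so an external cooler’s flow would represent a far larger disturbance to its internal circulation.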

In light of the significant variations in transformer oil volumes, flow to the external cooler may need to be tailored for the particular transformer involved. Besides possibly needing to modify the internal oil-cooling pattern, there also is a concern about creation of a vortex at the top hose connection. This would lead to air being sucked in and air bubbles being injected into the bottom of the transformer. A minimal oil level above the top hose connection must be maintained to avoid this, or other preventive measures must be adopted. MT


Noel Staszewski is a senior engineer in the Network Services Department of Consumers Energy. He has over 25 years of engineering experience in asset management and equipment maintenance in the utility industry, combined with additional experience in technology and product development, evaluation, reliability engineering and failure analysis of electronic components and systems in the automotive and computer industries. Telephone: (810) 760-3237; E-mail: nnstaszewski@cmsenergy.com

Mike Walker, a registered Professional Engineer in Michigan, spent 33 years in a number of engineering positions with Consumers prior to retiring in 2003. Since then, he has worked as an independent contractor for various companies. E-mail: mkwalker16@hotmail.com


Leak Detection: The Science And The Art

Fluids are always looking for a way out of a system. Whenever they find one, you end up with a leak. Whether it’s major or minor in scope, it’s sure to be a drain on your efficiencies and profits.


There’s both science and art when it comes to leak detection in industry. It’s science because leak detection is an engineering issue that requires very sophisticated tools and systems. It’s an art because successful leak detection is a matter of training, experience and management emphasis.

One of the country’s leading experts in all of this is Alan Bandes of UE Systems, based in Elmsford, NY. In a recent “Tech Tips Newsletter,” he notes that a good leak detection program in any company or any plant should involve walk-arounds. “If you don’t perform a walk-around prior to performing a survey, there will be a lot of potential unexpected problems regarding accessibility, equipment used and route planning. Maintenance management should encourage inspectors to perform a walk-around for the sake of efficiency and effectiveness,” he says.

What Bandes and other experts are warning against is too much reliance on automation, and not enough on management programs and planned surveys by trained maintenance personnel. As Allan Rienstra, of SDT North America, in Cobourg, Ontario, puts it, “The foundation of any leak management program is training. Ultrasound leak inspection is simple science, but like anything there are tricks to the trade that need to be learned.” That’s why SDT and UE, as well as others in the business, offer extensive training to their customers and prospects. “Other ways to keep up,” adds Rienstra, “include attending industry conferences and reviewing consumer-based web sites.” Bandes’ newsletter is available on the Internet, as is SDT’s monthly Ultrawave Technology Report.

Some tech trends
While training and management emphasis are crucial for a successful leak detection program, there are some clear technology developments that maintenance experts need to watch in coming years.

“The technology is moving toward enhancing existing products with specialized features to improve leak detection activities,” says UE’s Bandes. “Ultrasound is used predominantly in the mid- to gross ranges of leak detection, where leak rates range from 1 x 10⁻³ std cc/sec on up. To assist on the fringes of detection, new specialized probes have been produced, such as UE’s Close Focus Module, which enhances low-level emissions, making leaks near the low-end threshold more detectable.”

What about leak detection in areas where accessibility is difficult?

“New flexible probes have been developed that can be bent and manipulated at odd angles,” Bandes explains. That includes leaks in distant spots, like pressurized cables in ceilings. “Parabolic microphones,” he notes, “are used to pinpoint these leaks at greater distances than with standard scanning modules.”

What about special situations that require permanent or fixed monitoring?

According to Bandes, the industry is supplying remote mountable transducers that can be set for alarming if leaks either occur or exceed set threshold levels. “Some of these specialized remote sensors are configured to detect leaks in valves with a 4-20 mA or 0-10 V DC output. Heterodyned output can be configured to send information to a control panel where the information can be viewed or recorded,” adds the UE executive.
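The mapping from a sensor’s loop current to a leak reading and alarm can be sketched in a few lines. This is a hedged illustration only: the 0-100 dB measurement span and the 58 dB alarm threshold below are assumed placeholder values, not figures from UE or any particular vendor; real transducers embed their own instrument-specific scaling.

```python
# Hypothetical sketch: mapping a remote ultrasonic sensor's 4-20 mA
# loop current onto a decibel reading and checking an alarm threshold.
# The dB span and threshold are illustrative assumptions.

def current_to_db(current_ma, db_min=0.0, db_max=100.0):
    """Linearly scale a 4-20 mA loop current across the dB span."""
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("current outside the 4-20 mA loop range")
    return db_min + (current_ma - 4.0) / 16.0 * (db_max - db_min)

def leak_alarm(current_ma, threshold_db=58.0):
    """Return True when the mapped reading exceeds the threshold."""
    return current_to_db(current_ma) > threshold_db

print(current_to_db(12.0))  # mid-scale current maps to mid-span: 50.0
print(leak_alarm(18.5))     # well above threshold
```

A 0-10 V DC sensor would use the same idea with a 0-10 scaling instead of the 4-20 mA live-zero range.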

Other companies in the business, such as Monarch Instrument, of Amherst, NH, SPM, in Marlborough, CT, and Whisper Ultrasonic Leak Detector, of East Syracuse, NY, also offer products for leak detection programs, and are constantly developing ever-more accurate and sensitive devices for leak detection.

Greenhouse gas quotas
SDT’s Rienstra notes other trends. “There’s a changed point of view in manufacturing regarding compressed air leak detection,” he says. “Compressed air leak management was predominantly done for energy efficiency because of the high cost of energy required to compress air. Average systems have between 30 and 35% leakage if there is no program in place. A leak management program targets leak rates under 10%.”
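The economics behind those leakage percentages are simple to work through. The sketch below uses assumed inputs (a 75 kW compressor, 8000 operating hours a year, $0.10/kWh) purely to show the shape of the calculation; only the 32% unmanaged and 10% managed leak fractions come from Rienstra’s figures.

```python
# Illustrative arithmetic for compressed-air leakage costs.
# Compressor size, hours and power price are assumed example values.

def annual_leak_cost(compressor_kw, hours_per_year, price_per_kwh, leak_fraction):
    """Energy dollars spent compressing air that escapes through leaks."""
    return compressor_kw * hours_per_year * price_per_kwh * leak_fraction

before = annual_leak_cost(75, 8000, 0.10, 0.32)  # unmanaged: ~32% leakage
after = annual_leak_cost(75, 8000, 0.10, 0.10)   # managed: 10% target
print(round(before), round(after), round(before - after))
```

Under these assumptions, cutting leakage from 32% to 10% recovers $13,200 a year on a single compressor, which is why the energy case usually drives the program.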

As Rienstra noted in his article in the December 2006 UTILITIES MANAGER supplement to MAINTENANCE TECHNOLOGY, manufacturers are still after those energy savings (the challenge), but there is also a win because less energy consumption means fewer greenhouse gas emissions. In some countries companies have a greenhouse gas emission quota. If they are able to operate under that quota, they can save on emissions and even sell their leftover quota to others (the opportunity).

Agreeing with Bandes, Rienstra notes that there are two aspects here for maintenance management to consider: training and “the gadgets” (the art and the science). “We are all gadget-driven,” he says. “Flexible wand sensors, parabolic dishes with laser pointers and extended distance sensors help make the leak inspector more efficient and provide him with extra levels of safety.”

Rienstra adds that leak calculators reflect another growing technical trend. His company will be releasing one this year that allows users to plug in the decibel level of a found leak. The calculator will then process all the data required to assign a dollar value to that leak.
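A calculator of the kind Rienstra describes can be sketched as follows. Everything in this example is a hypothetical placeholder: the dB-to-CFM lookup table and the annual cost per CFM are invented for illustration, since real calculators embed instrument-specific correlations between decibel level and leak flow.

```python
# A hedged sketch of a leak-cost calculator: dB reading -> estimated
# flow -> annual dollars. The table and cost figure are hypothetical.

DB_TO_CFM = {20: 0.5, 30: 1.0, 40: 2.2, 50: 4.5, 60: 9.0}  # illustrative

def leak_dollars_per_year(db_reading, dollars_per_cfm_year=250.0):
    """Estimate the annual dollar value of a leak from its dB level."""
    # Use the nearest table entry at or below the reading.
    usable = [db for db in DB_TO_CFM if db <= db_reading]
    if not usable:
        return 0.0
    cfm = DB_TO_CFM[max(usable)]
    return cfm * dollars_per_cfm_year

print(leak_dollars_per_year(45))  # 45 dB falls in the 40 dB bin: 550.0
```

The point of such a tool is exactly what Rienstra describes: turning an instrument reading into a dollar figure that management can act on.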

Systematic approach and training
Of course, not all leaks are the same in terms of detection and control. Is it a specialized gas, compressed air, steam? What type of system or systems are to be monitored?

“What are the acceptable leak rates?” asks Bandes. “The first thing to do is to establish a baseline. Know what is going on with the system right now,” he advises. “Is the system performing as required? Companies should set a workable goal. For example, if compressed air leaks are the issue, review the use of compressed air; are there alternative technologies that can replace the use of air in some areas? Who will perform the leak survey? Above all,” he cautions, “these inspectors should have training, so training should be on the check-off list.”

Consider, too, the cost of a typical leak and how many you project in your plant: 10, 100, 1000? Walk through the system with a diagram or create a map of the system during the walk-through process. Ask what type of equipment will be needed: sophisticated or basic ultrasonic instruments? “The answers,” Bandes explains, “will be determined by the complexity of the system.”

A method of recording and reporting leak survey results, including cost-avoidance figures, should also be created. In addition, there should be a method of follow-up to assure the leaks are repaired properly. “Routes should be created that are manageable. Leak detection does not stop at the survey,” warns Bandes. “It should be routinely incorporated into maintenance planning.”

Educating employees can be a particularly cost-effective way to cut down on leaks. Explain to them the importance of your leak detection program and why they should report leaks when they notice them. Explain that misuse of compressed air can be very costly, and train them in its proper use.

Don’t feel as though you have to reinvent the wheel, either. When it comes to educating personnel on leak detection, you’ll find that there are numerous resources available through manufacturers of machinery, ultrasonic equipment suppliers and consultants. The U.S. Department of Energy also has information on its Web site for download.

Biggest leak detection mistakes
While leak detection seems a simple enough task, there are pitfalls. “The biggest mistake I see is venturing into a leak detection program without any strategy or written goals,” warns Rienstra. “Without team leaders,” he continues, “without training, without a guideline for how they will present their successes to upper management, any leak detection program is doomed to failure.”

According to Rienstra, as far as techniques go, far too often an inspector does his/her job and leaks are found and tagged, but there is no strategy in place to make sure things get fixed. If the goal is energy savings and greenhouse gas reduction, then the leak has to be fixed to save. “A found leak never saved a penny,” he says.

Bandes of UE adds, “The most common mistakes are lack of planning, lack of communication and insufficient training. Any program, whether it is leak detection or predictive maintenance, requires the support of management.” Don’t just start a program without planning it thoroughly. Bandes suggests that you heed the following checklist:

  • Communicate with management and those who will be part of the program.
  • Explain the program, the methods and the goals.
  • Think through strategies of detection and route creation, reporting and recording results.
  • Have some plan for follow-up on repairs and carefully choose the instruments to be used in relation to the type of system to be inspected.

Remember that without the training of inspection personnel, your whole program can fail. To be successful, personnel need to know the effective methods for locating leaks, as well as how to work with competing ultrasounds in loud environments.

The science and the art
The science of leak detection gets more and more accurate and sophisticated every year. “Manufacturers are always looking for ways to increase the threshold of sensitivity (find smaller and smaller leaks). Probably the most important development aside from that would be software that maps out the inspection process and allows for accountability from the inspector to the repair,” says Rienstra. In other words, more and more automation is on the horizon for leak detection.

And, he adds, all leak detection is basically “dollar-driven.” He notes, for example, that energy in California costs close to five times what it costs in other parts of the country. “You think compressed air leaks aren’t issues in that competitive state?”

The art of leak detection, however, is best summed up in the need for training and management emphasis and involvement. Bandes reminds us how vital it is to communicate with management. Leak detection and control have always been important engineering and production issues. These days, though, leakage is also too costly an issue to ignore (and an increasingly significant social issue as well). Any program to stop leaks is now too important to try to implement without management involvement, strategy, planning and (one more time) TRAINING.

No leak detection program will ever be perfect, but you can get closer and closer to perfect by concentrating on both the science and the art of it.

George Weimer is a professional writer based in Cleveland, OH.
