

8:09 pm
April 29, 2009

Going Wireless: Wireless Technology Is Ready For Industrial Use

Wireless works in a plant, but you’ll want to be careful regarding which “flavor” you choose

Wireless technology now provides secure, reliable communication for remote field sites and for applications where wires cannot be run for practical or economic reasons. For maintenance purposes, wireless can be used to acquire condition monitoring data from pumps and machines, effluent data from remote monitoring stations, or process data from an I/O system.

For example, a wireless system monitors a weather station and the flow of effluent leaving a chemical plant. The plant’s weather station is 1.5 miles from the main control room. It has a data logger that reads inputs from an anemometer to measure wind speed and direction, a temperature gauge and a humidity gauge. The data logger connects to a wireless remote radio frequency (RF) transmitter module, which broadcasts a 900MHz, frequency hopping spread spectrum (FHSS) signal via a Yagi directional antenna installed at the top of a tall boom beside the weather station building. This link posed no problem.

However, the effluent monitoring station was thought to be impossible to connect via wireless. Although the distance from this monitoring station to the control room is only one-quarter mile, the RF signal had to pass through a four-story boiler building. Nevertheless, the application was tested before installation, and it worked perfectly. The lesson here is that wireless works in places where you might think it can’t. All you have to do is test it.

There are many flavors of wireless, and an understanding of them is needed to determine the best solution for any particular application. Wireless can be licensed or unlicensed, Ethernet or serial interface, narrowband or spread spectrum, secure or open protocol, Wi-Fi…the list goes on. This article provides an introduction to this powerful technology.

The radio spectrum
The range from approximately 9 kilohertz (kHz) to 300 gigahertz (GHz) can be used to broadcast wireless communications. Frequencies higher than these belong to the infrared spectrum, visible light, X-rays, etc. Since the RF spectrum is a limited resource used by television, radio, cellular telephones and other wireless devices, the spectrum is allocated by government agencies that regulate what portion of the spectrum may be used for specific types of communication or broadcast.

In the United States, the Federal Communications Commission (FCC) governs the allocation of frequencies to non-government users. The FCC has designated the 902-928MHz, 2400-2483.5MHz and 5725-5875MHz bands for Industrial, Scientific and Medical (ISM) equipment, with limitations on signal strength, power and other radio transmission parameters. These are known as unlicensed bands, and can be used freely within FCC guidelines. Other bands in the spectrum can be used with the grant of a license from the FCC. (Editor’s Note: For a quick definition of the various bands in the RF spectrum, as well as their uses, log on to: http://encyclopedia.thefreedictionary.com/radio+frequency )

Licensed or unlicensed
A license granted by the FCC is needed to operate on a licensed frequency. Ideally, these frequencies are interference-free, and legal recourse is available if interference occurs. The drawbacks are a complicated and lengthy licensing procedure; the inability to purchase off-the-shelf radios, since units must be manufactured for the licensed frequency; and, of course, the costs of obtaining and maintaining the license.


License-free operation uses one of the frequency bands the FCC has set aside for open use, with no registration or authorization required. Depending on where the system will be located, there are limits on the maximum transmission power. For example, in the U.S. 900MHz band, the maximum is 1 Watt of transmitter output power, or 4 Watts EIRP (Effective Isotropic Radiated Power) once antenna gain is included.
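As a rough illustration of how these two limits relate, EIRP is simply transmitter power plus antenna gain when both are expressed in decibels. The 6 dBi antenna gain below is an assumed value chosen to make the arithmetic come out; it is not a figure from the article:

```python
# EIRP (Effective Isotropic Radiated Power) combines transmitter power
# and antenna gain. Working in decibels turns this into simple addition:
#   EIRP(dBm) = P_tx(dBm) + G_antenna(dBi)
import math

def watts_to_dbm(p_watts):
    """Convert power in Watts to dBm (decibels relative to 1 mW)."""
    return 10 * math.log10(p_watts * 1000)

def dbm_to_watts(p_dbm):
    """Convert dBm back to Watts."""
    return 10 ** (p_dbm / 10) / 1000

tx_power_dbm = watts_to_dbm(1.0)   # 1 Watt transmitter -> 30 dBm
antenna_gain_dbi = 6.0             # assumed 6 dBi antenna (illustrative)
eirp_dbm = tx_power_dbm + antenna_gain_dbi
print(f"EIRP = {eirp_dbm:.1f} dBm = {dbm_to_watts(eirp_dbm):.1f} W")
# -> EIRP = 36.0 dBm = 4.0 W
```

This shows why "1 Watt" and "4 Watts EIRP" can describe the same installation: a 1 W transmitter feeding a 6 dBi antenna radiates 4 W in its strongest direction.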

The advantages of using unlicensed frequencies are clear: no cost, time or hassle in obtaining licenses; many manufacturers and suppliers serving this market; and lower startup costs, because a license is not needed. The drawback is that, since these bands are unlicensed, they can be “crowded” and may therefore suffer interference and loss of transmission. That’s where spread spectrum comes in. Spread spectrum radios deal with interference very effectively and perform well, even in the presence of RF noise.

Spread spectrum systems
Spread spectrum is a method of spreading the RF signal across a wide band of frequencies at low power, versus concentrating the power in a single frequency as is done in narrowband transmission. Narrowband refers to a signal that occupies only a small section of the RF spectrum, whereas a wideband or broadband signal occupies a larger section of it. The two most common forms of spread spectrum radio are frequency hopping spread spectrum (FHSS) and direct sequence spread spectrum (DSSS). Most unlicensed radios on the market are spread spectrum.

As the name implies, frequency hopping changes the frequency of the transmission at regular intervals of time. The advantage is obvious: because the transmitter changes its broadcast frequency so often, only a receiver programmed with the same algorithm can listen to and follow the message. The receiver must be set to the same pseudo-random hopping pattern, and listen for the sender’s message at precisely the correct time on the correct frequency. Fig. 1 shows how the frequency of the signal changes with time. Each frequency hop is equal in power and dwell time (the length of time the radio stays on one channel). Fig. 2 shows a two-dimensional representation of frequency hopping, in which the frequency of the radio changes for each period of time. The hop pattern is based on a pseudo-random sequence.
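The shared-pattern idea can be sketched in a few lines of Python. The channel plan, dwell time and seed below are illustrative assumptions, not values from any real radio, and a real FHSS radio would also enforce the FCC's minimum frequency separation between consecutive hops, which this sketch does not:

```python
# A minimal sketch of pseudo-random frequency hopping: transmitter and
# receiver seeded identically derive the same channel for every time slot,
# so only a synchronized partner can follow the signal.
import random

CHANNELS = list(range(902, 928))  # illustrative 1 MHz channels, 902-928 MHz
DWELL_TIME_MS = 100               # assumed dwell time per hop

def hop_sequence(shared_seed, hops):
    """Generate the pseudo-random hop pattern both ends must share."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

tx_hops = hop_sequence(shared_seed=42, hops=8)
rx_hops = hop_sequence(shared_seed=42, hops=8)
assert tx_hops == rx_hops  # same seed -> same pattern -> the link holds

for slot, freq in enumerate(tx_hops):
    print(f"slot {slot} ({slot * DWELL_TIME_MS} ms): {freq} MHz")
```

A receiver seeded differently would derive a different sequence and hear only unrelated fragments, which is the basis of both the robustness and the security claims made for FHSS.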


DSSS combines the data signal with a higher data-rate bit sequence (also known as a “chipping code”), thereby “spreading” the signal over a greater bandwidth. In other words, the signal is multiplied by a noise-like signal generated from a pseudo-random sequence of 1 and -1 values. The receiver then multiplies the received signal by the same sequence to arrive at the original message (since 1 x 1 = 1 and -1 x -1 = 1).
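That multiply-and-recover arithmetic can be demonstrated with a toy sketch. The 8-chip code below is an arbitrary assumption, and a real DSSS modem works on RF waveforms rather than lists of integers, but the spread/despread round trip is the same idea:

```python
# Toy DSSS illustration: each data bit, represented as +1/-1, is multiplied
# by a higher-rate +1/-1 chipping code. The receiver multiplies by the same
# code; since 1*1 = 1 and -1*-1 = 1, the original bit reappears.
CHIP_CODE = [1, -1, 1, 1, -1, 1, -1, -1]  # assumed 8-chip code

def spread(bits):
    """Multiply each data bit by every chip in the code (spreading)."""
    return [bit * chip for bit in bits for chip in CHIP_CODE]

def despread(chips):
    """Correlate received chips against the code to recover each bit."""
    n = len(CHIP_CODE)
    bits = []
    for i in range(0, len(chips), n):
        # Sum of chip*code is +n for a +1 bit, -n for a -1 bit.
        corr = sum(c * k for c, k in zip(chips[i:i + n], CHIP_CODE))
        bits.append(1 if corr > 0 else -1)
    return bits

data = [1, -1, -1, 1]
assert despread(spread(data)) == data  # round trip recovers the message
```

Note that each bit now occupies eight chip periods, which is exactly the bandwidth expansion the next paragraph describes.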

When the signal is “spread,” the transmission power of the original narrowband signal is distributed over the wider bandwidth, thereby decreasing the power at any one particular frequency (also referred to as low power density). Fig. 3 shows the signal over a narrow part of the RF spectrum. In Fig. 4, that signal has been spread over a larger part of the spectrum, keeping the overall energy the same but decreasing the energy per frequency. Since spreading reduces the power in any one part of the spectrum, the signal can appear as noise. The receiver must recognize this signal and demodulate it to arrive at the original signal without the added chipping code.

FHSS and DSSS both have their place in industry, and either can be the “better” technology depending on the application. Rather than debating which is better, it is more important to understand the differences and then select the best fit for the application. In general, the decision involves:

  • Throughput
  • Collocation
  • Interference
  • Distance
  • Security

Throughput is the average amount of data communicated in the system every second. This is probably the first decision factor in most cases. DSSS has a much higher throughput than FHSS because it uses its bandwidth more efficiently and employs a much larger section of the spectrum for each transmission. In most industrial remote I/O applications, the throughput of FHSS is not a problem.

As the size of the network grows or the data rate increases, this may become a greater consideration. Most FHSS Ethernet radios offer a throughput of 50-115 kbps. Most DSSS radios offer a throughput of 1-10 Mbps. Although DSSS radios have a higher throughput than FHSS radios, one would be hard pressed to find any DSSS radios that serve the security and distance needs of the industrial process control and SCADA market. Unlike FHSS radios, which operate over 26MHz of the spectrum in the 900MHz band (902-928MHz), and DSSS radios, which operate over 22MHz of the 2.4GHz band, licensed narrowband radios are limited to 12.5kHz of the spectrum. Naturally, as the width of the spectrum is limited, the bandwidth and throughput are limited as well. Most licensed-frequency narrowband radios offer a throughput of 6400 to 19200 bps.
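To put these throughput figures in perspective, here is a quick calculation of how long each class of radio would take to move the same payload. The 1 MB payload is an arbitrary example, and protocol overhead is ignored:

```python
# Time to move a fixed payload (say, a 1 MB data-logger download)
# at representative rates for each radio class quoted above.
payload_bits = 1_000_000 * 8  # 1 MB expressed in bits

rates_bps = {
    "licensed narrowband (19.2 kbps)": 19_200,
    "FHSS Ethernet radio (115 kbps)": 115_000,
    "DSSS radio (1 Mbps)": 1_000_000,
}

for name, bps in rates_bps.items():
    print(f"{name}: {payload_bits / bps:.0f} s")
# Roughly: narrowband ~417 s, FHSS ~70 s, DSSS ~8 s
```

For a remote I/O poll of a few hundred bytes every second, even the narrowband rate is ample, which is why throughput is rarely the deciding factor in SCADA work.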

Collocation refers to having multiple independent RF systems located in the same vicinity. DSSS does not allow for a high number of radio networks to operate in close proximity as they are spreading the signal across the same range of frequencies. For example, within the 2.4GHz ISM band, DSSS allows only three collocated channels. Each DSSS transmission is spread over 22MHz of the spectrum, which allows only three sets of radios to operate without overlapping frequencies.
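The three-channel limit follows from simple bandwidth arithmetic using the figures quoted above; the calculation assumes channels are packed edge to edge:

```python
# Non-overlapping DSSS channels in the 2.4 GHz ISM band,
# using the band edges and channel width cited in the article.
ism_24_band_mhz = 2483.5 - 2400.0   # 83.5 MHz of spectrum
dsss_channel_mhz = 22.0             # each DSSS transmission occupies 22 MHz

non_overlapping = int(ism_24_band_mhz // dsss_channel_mhz)
print(f"{non_overlapping} non-overlapping DSSS channels")  # -> 3
```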

FHSS, on the other hand, allows multiple networks to use the same band through different hopping patterns. Hopping patterns that use different frequencies at different times over the same bandwidth are called orthogonal patterns. FHSS uses orthogonal hopping routines to place multiple radio networks in the same vicinity without their interfering with each other. That is a huge plus when designing large networks that need to keep one communication network separate from another. Many lab studies show that up to 15 FHSS networks may be collocated, whereas only three DSSS networks may be. Narrowband radios obviously cannot be collocated, as they operate on the same 12.5kHz of the spectrum.

Interference is RF noise in the vicinity and in the same part of the RF spectrum. The combining of the two signals can generate a new RF wave, or can cause losses or cancellation in the intended signal. Spread spectrum in general is known to tolerate interference very well, although the two flavors handle it differently. When a DSSS receiver encounters narrowband interference, it multiplies the received signal by the chipping code to retrieve the original message. This causes the original signal to appear as a strong narrowband signal, while the interference gets spread as a low-power wideband signal, appears as noise and thus can be ignored.

In essence, the very spreading that puts a DSSS signal below the noise floor is what allows DSSS radios to ignore narrowband interference when demodulating a signal. DSSS is therefore known to tolerate interference very well, but it is prone to fail when the interference is at a higher total transmission power and the demodulation effect cannot drop the interfering signal below the power level of the original signal.

Because FHSS operates over 83.5MHz of the spectrum in the 2.4GHz band, producing high-power signals at particular frequencies (equivalent to many short, synchronized bursts of narrowband signal), it will avoid interference as long as it is not on the same frequency as the narrowband interferer. Narrowband interference will, at most, block a few hops, which the system can compensate for by moving the message to a different frequency. Also, FCC rules require a minimum separation of frequency between consecutive hops, so the chance of a narrowband signal interfering in consecutive hops is minimized.

When it comes to wideband interference, DSSS is not so robust. Since DSSS spreads its signal over 22MHz of the spectrum all at once, at much lower power, noise or a higher-power signal blocking that 22MHz can block 100% of the DSSS transmission, while blocking only about 25% of an FHSS transmission. In this scenario, FHSS loses some efficiency but is not a total loss.

In licensed radios the bandwidth is narrow, so a slight interference in the range can completely jam transmission. In this case, highly directional antennas and band pass filters may be used to allow for uninterrupted communication, or legal action may be pursued against the interferer.

802.11 radios are more prone to interference because so many readily available devices share this band. Ever notice how your microwave interferes with your cordless phone at home? They both operate in the 2.4GHz range, the same band used by most 802.11 devices. Security also becomes a greater concern with these radios.

When the intended receiver of a transmitter is located closer to other transmitters and farther from its own partner, the result is known as the near/far problem. The nearby transmitters can drown the receiver in foreign signals at high power levels. Most DSSS systems would fail completely in this scenario. The same scenario in an FHSS system would cause some hops to be blocked but would maintain the integrity of the system. In a licensed radio system, the outcome would depend on the frequency of the foreign signals: if they were on the same or a nearby frequency, they would drown the intended signal, but there would be recourse for action against the offender unless the offender held a license as well.

Distance is closely related to link connectivity: the strength of an RF link between a transmitter and a receiver, and the distance over which they can maintain a robust link. Given the same power level and the same modulation technique, a 900MHz radio will have higher link connectivity than a 2.4GHz radio. As the frequency increases, the transmission distance decreases if all other factors remain the same. The ability to penetrate walls and objects also decreases as the frequency increases. Higher frequencies in the spectrum tend to display reflective properties. For example, a 2.4GHz RF wave can bounce off reflective walls of buildings and tunnels. Depending on the application, this can be used as an advantage to carry the signal farther, or it may be a disadvantage, causing multipath, or no path at all because the signal is bouncing back.
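The distance advantage of lower frequencies can be quantified with the standard free-space path loss formula. This is an idealized line-of-sight model with no obstructions, and the 10 km distance and exact frequencies below are illustrative choices, not figures from the article:

```python
# Free-space path loss, a first-order link estimate:
#   FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB (ideal line of sight, no obstructions)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for f in (915, 2450):
    print(f"{f} MHz over 10 km: {fspl_db(10, f):.1f} dB")

# The frequency term alone gives 2.4 GHz a fixed penalty at any distance:
extra_loss = 20 * math.log10(2450 / 915)  # about 8.6 dB
print(f"2.4 GHz penalty vs 900 MHz: {extra_loss:.1f} dB")
```

An 8.6 dB disadvantage is substantial: every 6 dB of extra loss roughly halves the usable range, which matches the article's point that a 900MHz radio holds a link where a 2.4GHz radio cannot.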

The FCC limits the output power of spread spectrum radios. DSSS consistently transmits at low power, as discussed above, and stays within the FCC regulation by doing so. This limits the transmission distance of DSSS radios, which may be a limitation for many industrial applications. FHSS radios, on the other hand, transmit at high power on particular frequencies within the hopping sequence, but the average power across the spectrum is low and therefore meets the regulations. Since the actual signal is transmitted at much higher power than with DSSS, it can travel farther. Most FHSS radios are capable of transmitting over 15 miles, and longer distances with higher-gain antennas.

802.11 radios, although available in both DSSS and FHSS versions, have a high bandwidth and data rate, up to 54Mbps (at the time of this publication). It is important to note, however, that this throughput applies only over very short distances and degrades quickly as the distance between the radio modems increases. For example, a distance of 300 feet can drop the 54Mbps rate down to 2Mbps. This makes these radios ideal for a small office or home application, but not for many industrial applications where data must travel several miles.

Since narrowband radios tend to be a lower frequency, they are a good choice in applications where FHSS radios cannot provide adequate distance. A proper application for narrow band licensed radios is when there is a need to use a lower frequency to either travel over a greater distance, or be able to follow the curvature of the earth more closely and provide link connectivity in areas where line of sight is hard to achieve.

Since DSSS signals run at such low power, the signals are difficult to detect by intruders. One strong feature of DSSS is its ability to decrease the energy in the signal by spreading the energy of the original narrowband signal over a larger bandwidth, thereby decreasing the power spectral density. In essence, this can bring the signal level below the noise floor, thereby making the signal “invisible” to would-be intruders. On the same note, however, if the chipping code is known or is very short, then it is much easier to detect the DSSS transmission and retrieve the signal since it has a limited number of carrier frequencies. Many DSSS systems offer encryption as a security feature, although this increases the cost of the system and lowers the performance, because of the processing power and transmission overhead for encoding the message.

For an intruder to successfully tune into a FHSS system, he needs to know the frequencies used, the hopping sequence, the dwell time and any included encryption. Given that for the 2.4GHz band the maximum dwell time is 400ms over 75 channels, it is almost impossible to detect and follow a FHSS signal if the receiver is not configured with the same hopping sequence, etc. In addition, most FHSS systems today come with high security features such as dynamic key encryption and CRC error bit checking.
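The difficulty of blind interception can be illustrated with a rough probability estimate. This assumes an eavesdropper who can listen to only one channel per dwell period and is guessing at random, which is a deliberate simplification:

```python
# Why blind interception of FHSS is impractical: without the hopping
# sequence, a single-channel eavesdropper has only a 1-in-75 chance per
# dwell period of being on the right frequency (75 channels is the
# minimum required in the 2.4 GHz band, 400 ms the maximum dwell).
channels = 75
dwell_ms = 400

for k in (1, 5, 10):
    p = (1 / channels) ** k
    print(f"P(follow {k} consecutive hops by guessing) = {p:.3e}")
```

Even five consecutive hops (two seconds of traffic at the maximum dwell time) are beyond realistic guessing, and that is before any encryption layered on top is considered.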

Today, Wireless Local Area Networks (WLANs) are becoming increasingly popular. Many of these networks use the 802.11 standard, an open protocol developed by the IEEE. Wi-Fi is a logo used by the Wireless Ethernet Compatibility Alliance (WECA) to certify 802.11 products. Although industrial FHSS radios tend not to be Wi-Fi, and therefore are not compatible with these WLANs, there is a good chance of interference because they operate in the same bandwidth. Since most Wi-Fi products operate in the 2.4 or 5GHz bands, it may be a good idea to stick with a 900MHz radio in industrial applications, if the governing body allows this range (Europe allows only 2.4GHz, not 900MHz). This also provides an added security measure against RF sniffers (a tool used by hackers) in the more popular 2.4GHz band.

Security is one of the top issues discussed in the wireless technology sector. Recent articles about “drive-by hackers” have left present and potential consumers of wireless technology wary of possible infiltrations. Consumers must understand that 802.11 standards are open standards and can be easier to hack than many of the industrial proprietary radio systems.

The confusion about security stems from a lack of understanding of the different types of wireless technology. Today, Wi-Fi (802.11a, b and g) seems to be the technology of choice for many applications in the IT world, homes and small offices. 802.11 is an open standard to which many vendors, customers and hackers alike have access. While many of these systems can use encryption such as AES and WEP, many users forget or neglect to enable these safeguards, which would make their systems more secure. Moreover, features like MAC filtering can be used to prevent unauthorized access by intruders on the network. Nonetheless, many industrial end users are very wary about sending industrial control information over standards that are totally “open.”

So, how do users of wireless technology protect themselves from infiltrators? One almost certain way is to use non-802.11 devices that employ proprietary protocols to protect networks from intruders. Frequency hopping spread spectrum radios have an inherent security feature built into them. First, only the radios on the network that are programmed with the “hop pattern” algorithm can see the data. Second, the proprietary, non-standard encryption method of the closed radio system further prevents any intruder from deciphering the data.

The idea that a licensed frequency network is more secure may be misleading. As long as the frequency is known, anyone can dial into the frequency, and as long as they can hack into the password and encryption, they are in. The added security benefits that were available in spread spectrum are gone since licensed frequencies operate in narrowband. Frequency hopping spread spectrum is by far the safest, most secure form of wireless technology available today.

Mesh radio networks
Mesh radio is based on the concept of every radio in a network having peer-to-peer capability. Mesh networking is becoming popular because its communication path can be quite dynamic. Like the World Wide Web, mesh nodes make and monitor multiple paths to the same destination to ensure that there is always a backup communication path for the data packets.

There are many concerns that developers of mesh technology are still trying to address, such as latency and throughput. The concept of mesh is not new. The internet and phone service are excellent mesh networks based in a wired world. Each node can initiate communication with another node and exchange information.
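The multiple-path idea behind mesh networking can be sketched with a toy routing example. The four-node topology and the breadth-first search are purely illustrative; they are not how any particular mesh product actually routes:

```python
# Toy mesh routing: nodes keep multiple paths to a destination, so when
# one link fails, traffic is rerouted over a surviving path.
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search for any available path from src to dst."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path survives

mesh = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(find_path(mesh, "A", "D"))   # -> ['A', 'B', 'D']
mesh["B"].remove("D")              # the B-D link fails
print(find_path(mesh, "A", "D"))   # reroutes -> ['A', 'C', 'D']
```

The redundancy is the appeal; the cost, as noted above, is the extra latency and throughput overhead of maintaining and re-evaluating those alternate paths.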

In conclusion, the choice of radio technology should be based on the needs of the application. For most industrial process control applications, proprietary-protocol, license-free frequency hopping spread spectrum radios (Fig. 5) are the best choice because of their lower cost and higher security capabilities in comparison to licensed radios. When distances are too great for a strong link between FHSS radios with repeaters, licensed narrowband radios should be considered for better link connectivity. The cost of licensing may offset the cost of installing extra repeaters in an FHSS system.

As more industrial applications require greater throughput, networks employing DSSS that enable TCP/IP and other open Ethernet packets to pass at higher data rates will be implemented. This is a very good solution where PLCs (Programmable Logic Controllers), DCS (Distributed Control Systems) and PCS (Process Control Systems) need to share large amounts of data with one another or with upper-level systems like MES (Manufacturing Execution Systems) and ERP (Enterprise Resource Planning) systems.

When considering a wireless installation, check with a company offering site surveys that allow you to install radios at remote locations to test connectivity and throughput capability. Often this is the only way to ensure that the proposed network architecture will satisfy your application requirements. These demo radios also let you look at the noise floor of the plant area, signal strength, packet success rate and the ability to identify if there are any segments of the license free bandwidth that are currently too crowded for effective communication throughput. If this is the case, then hop patterns can be programmed that jump around that noisy area instead of through it. MT

Gary Mathur is an applications engineer with Moore Industries-International, in North Hills, CA. He holds Bachelor’s and Master’s degrees in Electronics Engineering from Agra University, and worked for 12 years with Emerson Process Management before joining Moore. For more information on the products referenced in this article, telephone: (818) 894-7111; e-mail:



6:00 am
January 1, 2007

The Most Productive Nation


Bob Williamson, Contributing Editor

What should we wish for in 2007? Cutting operating costs has been at the top of the business and industry wish list for over 30 years…

Sometimes the cost-cutting bell gets rung louder than others. It all depends, some say, on Wall Street investors, stockholders, executive decisions, the marketplace, competition, return on investment, global economic changes and/or currency exchange rates. Then, in prosperous times, the cost-cutting bell is silenced. Should we wish for more of the same?

The United States remains the most productive nation in the world, and U.S. manufacturing has remained the most productive in the world since before 1960! Despite what the media says, despite politicians’ interpretations, despite what some may think, we are a model of economic stamina, whether measured by Real GDP (Gross Domestic Product) per capita or Real GDP per employed person. The top 10 in Real GDP per capita in 2005: U.S., Norway, Denmark, Netherlands, Canada, Austria, U.K., Belgium, Sweden, Australia. Manufacturing, not service industries, is one of the sources of “original wealth” (along with mining and agriculture). Should we wish to remain the most productive nation in the world? If so, we have serious work to do…and we already know how to do it!

Good news continues to be reflected in this year’s productivity trends: U.S. manufacturing Unit Labor Costs (ULC) fell 8.3% in the second quarter and 4.1% in the third quarter of 2006 (ULC = average labor compensation per unit of output). Productivity improvement measures, including advanced manufacturing methods, workplace innovation, favorable currency exchange rates, and (I believe) our maintenance and reliability improvements continue to sustain America’s competitive edge.

Low-wage countries continue to attract the attention of some manufacturers. However, these countries (China, India, Mexico, Turkey, Czech Republic, Hungary and Poland) also have extremely low productivity levels. This is where Unit Labor Cost comes in: a true measure of economic productivity. For example, wages are considerably lower in China and India (only 2% to 3% of U.S. wages). But productivity is also significantly lower in China and India (12% to 13% of U.S. productivity). That means considerably MORE labor hours are required to produce the same output in China and India than in the U.S. Still, China’s and India’s Unit Labor Costs are lower than those of the U.S., but only 20% lower, on average. And 20% isn’t that much when you calculate the true “costs” of importing goods from Asia. These include actual transportation, in-transit damage, un-returnable defective products, long lead times for changes and order quantities, and high inventory levels that have to be maintained here, not to mention the risk of dealing with a country (China) that doesn’t recognize proprietary information, patents, trademarks or copyright protections.

China and India, among others, will continue to be formidable consumers and competitors in the global market. Twenty-eight percent (28%) of all of the world’s jobs are in China and 15% are in India. As their standards of living increase, so will their cost of living and their employee compensation. In China, for example, average hourly compensation in manufacturing jobs rose 8.8% from 2002 to 2003, and another 8.1% from 2003 to 2004. To retain their lower ULC, China and India must employ increasingly advanced manufacturing technologies, methods and innovations along with their economic and environmental reform policies. Advanced manufacturing requires increasing levels of skilled and highly-skilled workers and technicians, which also brings higher compensation levels. As noted in previous columns and articles, developing and attracting higher-skilled workers will continue to be an escalating worldwide problem.

Our challenge for 2007 and beyond is to keep our productivity levels high and our operating costs down as we enter a 19-year era of drastic workforce demographic changes. We must dramatically improve the education levels of our workforce to facilitate error-free operations, plus accelerate our ability to rapidly innovate and improve our infrastructure, facilities, manufacturing, transportation and utilities. Our business and government leaders, schools and families all play a role in retaining and improving our competitive advantage. Look what’s happened over the past 30 years: Vocational/technical school programs have declined, as have skilled trades apprenticeship programs. Many manufacturing and maintenance jobs have lost their luster, despite relatively high wages. Changes in taxes, insurance, health care, permits and liability litigation have increased costs. The cost of procuring and transporting raw materials and finished goods has skyrocketed. Outsourcing and off-shoring, once thought to be “the answers” to our industrial woes, may not always be the best path to a long-term, viable economy. These strategies often just turn out to be “quick fixes” with long-term consequences.

My wish for 2007? Let’s all do our part to improve our Nation’s success by building a solid foundation based on an educated, motivated, innovative workforce. Let’s make our critical equipment, infrastructure and facilities the most reliable and best-maintained and our standard of living and productivity the highest in the world. Here’s wishing all of our faithful readers a very happy and prosperous New Year!

AUTHOR’S NOTE: The facts and statistics for this article were obtained from The Conference Board Report (October 2006); The Conference Board via Newswire (June 01, 2004); USDOL, Bureau of Labor Statistics News (Nov. 30, 2006 & Dec. 5, 2006); and the USDOL, BLS, Office of Productivity & Technology report: “Comparative Real GDP Per Capita and Per Person Fifteen Countries 1960-2005.”



6:00 am
January 1, 2007

Asset Intelligence Goes Beyond Basic Condition Monitoring

With new and increasingly powerful on-line equipment diagnostic tools becoming available every year, process manufacturing industries now have the opportunity to integrate this critical equipment condition information into their asset management strategies. These strategies can support more business-driven approaches aimed at improving overall financial performance. Much work still needs to be done, however.

Until now (in process manufacturing operations at least…), the focus has been on relatively limited and specific diagnostic monitoring of intelligent field devices and large rotating equipment. This is due largely to the widespread availability of highly capable, fieldbus-enabled condition monitoring tools, such as vibration, temperature and pressure monitoring and fluid analysis, all of which can be integrated into the control system strategy to react to critical changes in the readings.

But, within an overall asset management strategy, it’s important that real-time condition monitoring practices go beyond intelligent field devices and large rotating equipment to encompass all plant production assets. These should include all sensors and actuators (regardless of the vendor); rotating and non-rotating equipment, such as pumps, motors, compressors, turbines, mixers, dryers and heat exchangers; even entire process units.

The real goal is to move to predictive and proactive decision-making based on developing trends versus our current reactionary approach. This means that large (and often overwhelming) amounts of real-time diagnostic data now available must be collected, aggregated and analyzed, then put into proper context and made available to other plant and enterprise systems. In addition, we need to manage and control the resulting actions to manage risk and support our continuous improvement efforts, bringing together Maintenance, Operations and Engineering. By pulling these three aspects together—collection, analysis and action—we move from condition monitoring to “condition management” based on real-time asset intelligence.
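The collect/analyze/act loop described above can be sketched in miniature. The vibration readings, alarm limit and window size here are invented for illustration, and a real condition-management system would trend many variables across many assets:

```python
# Minimal collect -> analyze -> act sketch for one asset: compare a
# rolling average of condition readings against an alarm limit so a
# developing trend is flagged before outright failure.
from statistics import mean

VIBRATION_ALARM = 7.1  # assumed alarm limit, mm/s RMS (illustrative)

def analyze(readings, window=5):
    """Compare the recent rolling average against the alarm limit."""
    recent = mean(readings[-window:])
    if recent > VIBRATION_ALARM:
        return f"ALERT: rolling average {recent:.2f} mm/s exceeds limit"
    return f"OK: rolling average {recent:.2f} mm/s"

pump_101 = [2.1, 2.3, 2.2, 4.8, 6.9, 7.5, 8.2, 8.8]  # trending upward
print(analyze(pump_101))  # the upward trend trips the alarm
```

The point of the trend, rather than a single-reading threshold, is the shift from reactive to predictive decision-making that the paragraph above describes.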

The key lies in developing a knowledge management capability that captures the expertise of today’s highly experienced operators, engineers and maintenance technicians. While this capability is important today, it will become even more critical in the future as our industrial plants struggle to maintain current levels of asset utilization and availability with an ever-shrinking pool of skilled and knowledgeable personnel due to an aging workforce and retirement of many of our most experienced people.

By combining this knowledge with an integrated view of the entire operation from both the business and operations perspectives, we can move to an environment where more informed decisions can be made in a more timely fashion. From this base, we will be well-positioned to manage the risks inherent in the process industries (i.e., health and safety, regulatory, financial and environmental) while delivering improved business performance and shareholder value. MT

Continue Reading →


6:00 am
January 1, 2007
Print Friendly

The Maintenance/Production Partnership: Part II


Ken Bannister, Contributing Editor

Role definition is crucial if both Maintenance and Production departments are to strike an accord and work in an autonomous yet cohesive manner to deliver a high-quality product in a waste-free, cost-effective way. Virtually every major management philosophy and methodology in practice today recognizes and fosters the integral relationship between the Maintenance and Production departments. Zero-inventory Just-In-Time (JIT) and Lean manufacturing methods would not be possible without high levels of equipment reliability and availability, driven by active operator involvement in the maintenance process.

Autonomous operator-based maintenance is foundational to the Total Productive Maintenance (TPM) philosophy, and is a cornerstone of the Reliability Centered Maintenance (RCM) methodology, both of which heavily utilize operator input to design, implement and continuously improve equipment maintenance reliability strategies. Increasing reliability and throughput requires Maintenance and Production to work together on a two-pronged management and hourly workforce level.

Operator-based maintenance
Operator-based maintenance can be implemented through the following three-step approach designed to promote confidence in both parties:

Step 1: Commence with a revised work acceptance procedure. Whenever Production calls in a machine problem, guide the caller(s) to disclose their name, the machine number/description, location, area of the problem (component or system) and a primary-sense STILL (Smell, Touch, Intuition, Look, Listen) analysis of what the problem is believed to be. Operators instinctively know when their equipment is not running in the “sweet spot,” but they are rarely asked for their opinion(s). This step simplifies and speeds up the pre-planning process and allows the scheduler to more accurately dispatch the correct resources the first time.
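
The intake record described in Step 1 can be sketched as a simple data structure. This is an illustrative sketch only — the field names and sample values are assumptions, not a standard CMMS schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of the revised work acceptance record described above.
# Field names and sample values are illustrative assumptions.
@dataclass
class WorkRequest:
    caller: str                # name of the person calling in the problem
    machine_id: str            # machine number
    description: str           # machine description
    location: str
    problem_area: str          # component or system
    still_observation: str     # primary sense used: Smell, Touch, Intuition, Look, Listen
    suspected_cause: str       # the operator's opinion of the problem

# Example intake call, captured for the scheduler:
req = WorkRequest(
    caller="J. Smith",
    machine_id="P-101",
    description="Feedwater pump",
    location="Boiler house, level 2",
    problem_area="Drive-end bearing",
    still_observation="Listen",
    suspected_cause="High-pitched whine suggests bearing wear",
)
```

Capturing the operator's STILL observation up front is what lets the scheduler dispatch the correct trade and parts the first time.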

Step 2: Allow and encourage operators to be part of the testing, start-up and acceptance after repair completion.

Step 3: Introduce Reliability Centered Maintenance (RCM). Choose a suitable RCM pilot and always include the relevant equipment operator and supervisor as part of the RCM analysis team when performing the FMEA analysis and condition-based maintenance work tasks. Use a perimeter-based maintenance approach in which the equipment is set up for rudimentary preventive and condition monitoring checks while running. These checks can include temperature, flow, throughput, fill level, pressure and filter cleanliness, set up in an interactive “Go/No Go” style that lends itself perfectly to a regular operator check. This type of “Go/No Go” check only requires paperwork, in the form of a work request, when a “No Go” state is in effect.

Take, for example, a pre-RCM PM work order that might have instructed a maintainer to check and record all gauge pressures. This is not just a waste of maintenance resources; to avert a developing situation immediately, the maintainer would also have to know the upper and lower safe operating window (SOW) limits for every gauge.

Recording every good pressure in the CMMS history also is meaningless and a waste of data-entry resources. Marking each gauge with the SOW allows any person viewing the instrument to tell if the needle is in the safe or “Go” position between the lines, in which case no further action is required or taken. If, however, the needle is outside the SOW mark lines, or in a “No Go” state, the operator contacts the supervisor, who immediately raises a work request for Maintenance to attend to the pending situation. Because of the RCM FMEA analysis, Maintenance knows right away what the problem root cause could be and activates a planned work order in response to the event condition.
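
The “Go/No Go” logic above can be expressed as a trivial check. This is a sketch under assumptions — the gauge names and SOW limits are invented for illustration:

```python
# Sketch of the "Go/No Go" gauge check described above.
# SOW (safe operating window) limits per gauge are illustrative values only.
SOW_LIMITS = {
    "discharge_pressure_psi": (40.0, 75.0),
    "oil_temperature_c": (30.0, 65.0),
}

def check_gauge(gauge: str, reading: float) -> str:
    """Return 'Go' if the needle is between the SOW marks, else 'No Go'."""
    low, high = SOW_LIMITS[gauge]
    return "Go" if low <= reading <= high else "No Go"

def work_request_needed(gauge: str, reading: float) -> bool:
    """Paperwork (a work request) is raised only in a 'No Go' state."""
    return check_gauge(gauge, reading) == "No Go"
```

The point of the design is visible in the code: a “Go” reading produces no record at all, which is exactly why marking the SOW on the gauge face beats logging every in-range pressure into the CMMS.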

RCM, like Total Productive Maintenance (TPM), advocates autonomous maintenance work by operators, and is a perfect catalyst for building and cementing operator-based maintenance as a first-level maintenance approach, wherein the operator becomes the true machine guardian on a daily basis. Once a comfortable maintainer/operator working relationship is established, more complex PM-styled tasks, such as lubrication and filter changeouts, can be engineered into the operator-based maintenance program. In Fig. 1, operator-based maintenance is shown dovetailing into the core element of the maintenance process.


Maintenance/production management alignment
Aligning the Maintenance and Production management teams to work in partnership is achieved through communication and an understanding of each other’s goals and objectives. In the process, the parties work collaboratively in the planning and scheduling of the production equipment uptime and downtime activities.

As both departments own the equipment in different ways, both compete for “alone” time with the equipment. Unfortunately, if both agendas are not harmonized, the equipment will suffer and both departments will lose.

The interactive input/output information required of both departments in order to prepare and schedule weekly forecasts and daily work schedules effectively is depicted in Fig. 2. In both cases, monthly and weekly schedule forecasts are being built on an ongoing basis, and being used as “best guesstimates” for assessing and managing resource requirements. From these forecasts come the daily schedules that are usually 70% to 95% accurate–and which should be just flexible enough to allow for minor unforeseen changes. To synchronize these daily schedules, both Maintenance and Production must agree, through the RCM process, what point in an asset’s condition dictates an uncontested responsive event in which both the Maintenance and Production planning and scheduling departments will work together in the asset’s interest alone.


The Maintenance department can further assist the Production staff by providing a series of documents that include:

  • a daily equipment condition report spelling out any triggered alarm conditions and found “No Go” exceptions that require planning and scheduling;
  • a status report of unfinished or “carryover” work from a previous day or shift;
  • a report driven by the fault codes marked on work orders, showing the percentage of non-maintenance-caused equipment failures (i.e., operator error, loading errors or jamming, overloading, etc.);
  • an equipment availability report.

The Production department can further assist the Maintenance staff by providing a report detailing any pending product changeover or retooling event, from which Maintenance can take the forced-downtime opportunity to plan and schedule backlog or pending work on that equipment. Production also assists Maintenance by providing reports on raw material problems, equipment incidents and any work requests. Getting together on a daily basis allows this information transfer and the setting of an almost-fixed daily schedule. The product is equipment reliability and availability that translates directly into sustainable throughput and quality!

Ken Bannister is lead partner & principal consultant for Engtech Industries, Inc. Telephone: (519) 469-9173; e-mail:


Continue Reading →


6:00 am
January 1, 2007
Print Friendly

Reducing Hot-Spot Temperatures in Transformers

In this real-world study from the power gen sector, researchers tested external oil coolers and ultra pure mineral oil to determine their effectiveness on hot spots, and, ultimately, equipment reliability

Over the past several years, Consumers Energy (“Consumers”) has come to rely strongly on external oil coolers to delay scheduled transformer capacity increases, or to cool transformers that experience marginally high top-oil temperatures. A transformer experiencing a top-oil temperature of 90 to 100 C or more would be a likely candidate for such an installation. These types of external coolers are installed in close proximity to the transformer using flexible hoses that are typically connected to existing 1-1/2″ taps near the top and bottom of the transformer.

Now that Consumers has acquired more than 20 oil coolers, questions frequently are being asked regarding the effectiveness of these units in actually limiting the loss of insulation life. Although the cooler reduces the oil temperature, there is a concern that it may be disrupting the natural convective oil flow inside the transformer and the hot-spot cooling effect may not be as great as expected or indicated by the top oil temperature.

Under normal conditions, the temperature gradient between the top and bottom of a transformer produces an internal oil circulation that acts to remove heat from the coils through convection. An external cooler can diminish this normal temperature gradient, resulting in reduced convective currents and, in theory, create pockets of stagnant oil and induce local overheating. To avoid this situation, some utilities have reportedly removed OEM-installed oil pumps from transformers where there has been no internally directed oil flow.

Equipment description
Study One…
The transformer selected for Study One was a unit being rewound for Consumers by Siemens Westinghouse of Hamilton, Ontario. This 5/6.25 MVA circular-core unit was originally manufactured by Allis Chalmers in 1952. Design changes by Siemens Westinghouse increased the OA rating to 6 MVA and the FA rating to 7.5 MVA. Six Luxtron fiber optic sensors were implanted near the top of the transformer’s secondary coils—two in each winding, with one located between the first and second disk and one between the second and third disk. The sensors were installed as near to the mid-point of the disks as feasible and in contact with the copper conductor. These locations are thought to closely represent the transformer’s hot-spot location. All other temperatures recorded in this study were taken from standard thermocouples.

A 50 kW external oil cooler was obtained from Unifin of London, Ontario. This cooling unit consists of a 1 HP Cardinal pump, two 4.0 HP fans and a heat exchanger. The pump used by Unifin is designed for a variety of applications, with the desired oil flow for a given application achieved by throttling the flow with a valve on the discharge side of the pump. Nominally, this combination of components is rated by Unifin for a flow rate of 20 GPM, but the pump can produce a much higher flow, as was observed in this study.

Study Two…
The transformer selected for Study Two was a unit being rewound for Consumers by Ohio Transformer of Tallmadge, Ohio. This 5 MVA base circular-core transformer was originally manufactured by GE in 1963.

Six FISO fiber optic sensors, two per phase, were implanted in the coils of the transformer and a FISO Nortech-6 monitor was installed to record the readings. The hotspot locations were determined by the design team at Ohio Transformer, and the sensors were installed during the rewind process. All other temperatures recorded in this study were taken from standard thermocouples.

A 100 kW external oil cooler was obtained from SD Myers. This cooling unit consists of a 3 HP pump, 5.0 HP fans and a heat exchanger. The cooler is mounted on a portable trailer and includes hoses configured with check valves and quick-connect fittings. The desired oil flow is achieved by throttling the flow with a valve on the discharge side of the pump. Nominally, this combination of components is rated by SD Myers for a flow rate of 50 GPM, with a capability of removing 340,000 BTU/hr.

An industry-standard mineral oil and an ultra pure mineral oil manufactured by Petro-Canada under the trade name Luminol were obtained from Ohio Transformer. The transformer was first filled with standard mineral oil, tested, drained, refilled with Luminol, and then retested to obtain the efficiency comparison between the insulating oils used in combination with and without the external auxiliary oil cooler.

Study conditions and results


Study One…
Heat runs were initially conducted on the Allis Chalmers transformer (which had undergone design changes and was being rewound by Siemens Westinghouse) at the OA and FA ratings and then at 150% of the FA rating, or 11.25 MVA. While still at the 11.25 MVA level, the oil cooler was connected and temperatures were recorded until temperature stabilization was achieved. The cooler’s oil flow rate maintained for the initial run was 45 GPM. The observed temperature differential between the cooler’s inlet and outlet was consistently about 10 C degrees.

One of the fiber optic sensors stopped working early in the first heat run. The instrument displaying the fiber optic temperatures is capable of displaying four readings at a time. The temperatures recorded were taken one each from the outside windings and two from the center phase winding.

The warmest hot-spot temperature recorded while loaded to 11.25 MVA, and without the cooler operational, was 112 C on the center phase winding.When temperature stabilization was reached after the cooler was operational, this temperature had been reduced to 100 C. The magnitude of this temperature reduction was fairly consistent across all the sensors.

At the end of the first heat run with the cooler connected, the pump flow rate was increased to its maximum (estimated to be about 60 to 65 GPM) for one hour. No appreciable change was noted in the hot-spot temperatures as a result, although there was a reduction of two degrees in the top-oil and average-oil rise temperatures. Had the test continued at this higher flow rate for a longer period, it is expected that the hot-spot temperature would have registered a similar decline.

The flow rate was then reduced to 20 GPM for a four-hour period. This resulted in an increase in the hot-spot temperatures of approximately 4 C degrees.
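
As a rough cross-check on these figures, the heat removed by the cooler can be estimated from the flow rate and the inlet/outlet temperature differential (Q = mass flow × specific heat × ΔT). The oil density and specific heat below are assumed typical mineral-oil values, not data from the study:

```python
# Rough heat-removal estimate for the Study One cooler: Q = m_dot * cp * dT.
# Oil properties are assumed typical values, not measurements from the study.
GPM_TO_M3_PER_S = 3.785e-3 / 60.0   # US gallons/min -> m^3/s
OIL_DENSITY = 870.0                 # kg/m^3 (assumed for mineral oil)
OIL_CP = 1900.0                     # J/(kg*K) (assumed)

def cooler_heat_removal_kw(flow_gpm: float, delta_t_c: float) -> float:
    """Estimate heat removed by the external cooler, in kW."""
    mass_flow = flow_gpm * GPM_TO_M3_PER_S * OIL_DENSITY   # kg/s
    return mass_flow * OIL_CP * delta_t_c / 1000.0         # kW

# At the observed 45 GPM and ~10 C differential:
print(round(cooler_heat_removal_kw(45.0, 10.0)))  # prints 47
```

Under these assumptions, the estimate lands close to the cooler's 50 kW nameplate rating, which is consistent with the unit running near capacity during the 11.25 MVA heat run.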

Study Two…
Heat runs were conducted on the GE transformer (which was being rewound by Ohio Transformer) at the OA and FA ratings and then at 150% of the FA rating, or 10.5 MVA, initially with the transformer filled with standard industry mineral oil and then repeated after draining the oil and re-filling with Luminol. While at the 10.5 MVA level and after the temperature stabilized, the oil cooler was connected and temperatures were recorded until they stabilized again. The cooler’s oil flow rate maintained for this study was 24 GPM.

The average hot-spot temperature recorded while loaded to the FA rating of 7 MVA, and without the cooler operational, was 92 C using standard oil and 87 C using Luminol, after stabilizing. When temperature stabilization was reached after the cooler was operational, this temperature was reduced to 83 C using standard oil and 80 C using Luminol. The magnitude of this temperature reduction was fairly consistent across all the sensors. The observed temperature differential between the cooler’s inlet and outlet varied between 8 and 14 C degrees using standard oil, and between 11 and 18 C degrees using Luminol.

The load was increased to the 10.5 MVA level, the oil cooler was connected, and temperatures were recorded until temperature stabilization was achieved. At this point, it was observed that the average hot-spot temperature of 140 C, in both cases, had been reduced to 127 C, using standard oil, and 115 C, using Luminol. The magnitude of this temperature reduction was fairly consistent across all the sensors. The observed temperature differential between the cooler’s inlet and outlet varied between 12 and 15 C degrees, using standard oil, and between 21 and 28 C degrees, using Luminol. (See Tables I & II and Figs. 2, 3, 4, 5, 6, 7.)







This study substantiates the benefit of employing an external oil cooler and the added benefit of using an ultra pure mineral oil (Luminol) in reducing a transformer’s hot-spot temperature, thus preserving the life of the unit’s paper insulation. The relatively large internal oil quantities and large heat-exchange surfaces of the transformers in this study result in relatively low internal oil and hot-spot temperatures.

Conversely, for a more modern unit with higher design temperatures, the expected temperature reduction with an external oil cooler could be even more impressive. However, the possibility of disrupted internal convection currents or diversion of oil from the transformers’ own radiators also would seem to be more likely because of the characteristically lower internal oil volumes. Consequently, a lower oil flow rate in the external cooler might be needed to avoid disrupting the transformer’s normal internal cooling pattern.

The transformer in Study One contained 1,920 gallons of oil, or 0.32 gallons per OA rated kVA, and the transformer in Study Two contained 1,300 gallons of oil, or 0.26 gallons per OA rated kVA. In a spot check of six transformers recently purchased by Consumers Energy, the lowest amount of oil found was 0.205 gallons per OA kVA rating. The SD Myers transformer maintenance guide reported in 1981 that some transformers had as little as 0.02 gallons per kVA.
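
The gallons-per-kVA figures quoted above follow directly from each unit's oil volume and OA rating:

```python
# Specific oil volume for the two study transformers, in gallons per OA-rated kVA.
def gal_per_oa_kva(oil_gallons: float, oa_rating_mva: float) -> float:
    return oil_gallons / (oa_rating_mva * 1000.0)

print(round(gal_per_oa_kva(1920, 6.0), 2))   # Study One (6 MVA OA): prints 0.32
print(round(gal_per_oa_kva(1300, 5.0), 2))   # Study Two (5 MVA OA): prints 0.26
```

The same ratio applied to the 0.02 gal/kVA units cited from the SD Myers guide makes the point of the following paragraph concrete: an oil volume more than a factor of ten smaller leaves far less margin before an external cooler disturbs internal circulation.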

In light of the significant variations in transformer oil volumes, flow to the external cooler may need to be tailored for the particular transformer involved. Besides possibly needing to modify the internal oil-cooling pattern, there is also a concern about creation of a vortex at the top hose connection, which would lead to air being sucked in and air bubbles being injected into the bottom of the transformer. A minimum oil level above the top hose connection must be maintained, or other preventive measures adopted, to avoid this. MT

Noel Staszewski is a senior engineer in the Network Services Department of Consumers Energy. He has over 25 years of engineering experience in asset management and equipment maintenance in the utility industry, combined with additional experience in technology and product development, evaluation, reliability engineering and failure analysis of electronic components and systems in the automotive and computer industries. Telephone: (810) 760-3237; E-mail:

Mike Walker, a registered Professional Engineer in Michigan, spent 33 years in a number of engineering positions with Consumers prior to retiring in 2003. Since then, he has worked as an independent contractor for various companies. E-mail:

Continue Reading →


6:00 am
January 1, 2007
Print Friendly

Leak Detection: The Science And The Art

Fluids are always looking for a way out of a system. Whenever they find one, you end up with a leak. Whether it’s major or minor in scope, it’s sure to be a drain on your efficiencies and profits.


There’s both science and art when it comes to leak detection in industry. It’s science because leak detection is an engineering issue that requires very sophisticated tools and systems. It’s an art because successful leak detection is a matter of training, experience and management emphasis.

One of the country’s leading experts in all of this is Alan Bandes of UE Systems, based in Elmsford, NY. In a recent “Tech Tips Newsletter,” he notes that a good leak detection program in any company or plant should involve walk-arounds. “If you don’t perform a walk-around prior to performing a survey, there will be a lot of potential unexpected problems regarding accessibility, equipment used and route planning. Maintenance management should encourage inspectors to perform a walk-around for the sake of efficiency and effectiveness,” he says.

What Bandes and other experts are warning against is too much reliance on automation–and not enough on management programs and planned surveys by trained maintenance personnel. As Allan Rienstra, of SDT North America, in Cobourg, Ontario, puts it, “The foundation of any leak management program is training. Ultrasound leak inspection is simple science, but like anything there are tricks to the trade that need to be learned.” That’s why SDT and UE, as well as others in the business, offer extensive training to their customers and prospects. “Other ways to keep up,” adds Rienstra, “include attending industry conferences and reviewing consumer-based web sites.” Bandes’ newsletter is available on the Internet, as is SDT’s monthly Ultrawave Technology Report.

Some tech trends
While training and management emphasis are crucial for a successful leak detection program, there are some clear technology developments that maintenance experts need to watch in coming years.

“The technology is moving toward enhancing existing products with specialized features to improve leak detection activities,” says UE’s Bandes. “Ultrasound is used predominantly in the mid- to gross ranges of leak detection, where leak rates range from 1 x 10^-3 std cc/sec on up. To assist on the fringes of detection, new specialized probes have been produced, such as UE’s Close Focus Module, which enhances low-level emissions, making leaks near the low-end threshold more detectable.”

What about leak detection in areas where accessibility is difficult?

“New flexible probes have been developed that can be bent and manipulated at odd angles,” Bandes explains. That includes leaks in distant spots, like pressurized cables in ceilings. “Parabolic microphones,” he notes, “are used to pinpoint these leaks at greater distances than with standard scanning modules.”

What about special situations that require permanent or fixed monitoring?

According to Bandes, the industry is supplying remote mountable transducers that can be set to alarm if leaks occur or exceed set threshold levels. “Some of these specialized remote sensors are configured to detect leaks in valves with a 4-20 mA or 0-10V DC output. Heterodyned output can be configured to send information to a control panel where the information can be viewed or recorded,” adds the UE executive.

Other companies in the business, such as Monarch Instrument, of Amherst, NJ, SPM, in Marlborough, CT, and Whisper Ultrasonic Leak Detector, of East Syracuse, NY, also offer products for leak detection programs–and are constantly developing new, ever-more accurate and sensitive devices.

Greenhouse gas quotas
SDT’s Rienstra notes other trends. “There’s a changed point of view in manufacturing regarding compressed air leak detection,” he says. “Compressed air leak management was predominantly done for energy efficiency because of the high cost of energy required to compress air. Average systems have between 30 and 35% leakage if there is no program in place. A leak management program targets leak rates under 10%.”

As Rienstra noted in his article in the December 2006 UTILITIES MANAGER supplement to MAINTENANCE TECHNOLOGY, manufacturers are still after those energy savings (the challenge), but there is also a win because less energy consumption means fewer greenhouse gas emissions. In some countries companies have a greenhouse gas emission quota. If they are able to operate under that quota, they can save on emissions and even sell their leftover quota to others (the opportunity).

Agreeing with Bandes, Rienstra notes that there are two aspects here for maintenance management to consider: training and “the gadgets” (the art and the science). “We are all gadget-driven,” he says. “Flexible wand sensors, parabolic dishes with laser pointers and extended distance sensors help make the leak inspector more efficient and provide him with extra levels of safety.”

Rienstra adds that leak calculators reflect another growing technical trend. His company will be releasing one this year that allows users to plug in the decibel level of a found leak. The calculator will then process all the data required to assign a dollar value to that leak.
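
A generic version of such a calculation is easy to sketch. The decibel-to-flow mapping in commercial calculators is proprietary, so this sketch starts from an estimated leak flow instead; the compressor power and electricity cost figures are illustrative assumptions, not data from the article:

```python
# Hedged sketch of a compressed-air leak cost estimate.
# All constants are illustrative assumptions, not vendor data.
KW_PER_100_CFM = 18.0     # assumed compressor specific power, kW per 100 CFM
HOURS_PER_YEAR = 8760.0   # continuous (24/7) operation assumed
COST_PER_KWH = 0.10       # assumed electricity price, $/kWh

def annual_leak_cost(leak_cfm: float) -> float:
    """Annual dollar cost of a leak, from its estimated flow in CFM."""
    compressor_kw = leak_cfm / 100.0 * KW_PER_100_CFM
    return compressor_kw * HOURS_PER_YEAR * COST_PER_KWH

# A single 10 CFM leak under these assumptions:
print(round(annual_leak_cost(10.0)))  # prints 1577
```

Even with conservative assumptions, a handful of modest leaks quickly reaches thousands of dollars a year, which is exactly the kind of figure that gets a leak survey funded.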

Systematic approach and training
Of course, not all leaks are the same in terms of detection and control. Is it a specialized gas, compressed air, steam? What type of system or systems are to be monitored?

“What are the acceptable leak rates?” asks Bandes. “The first thing to do is to establish a baseline. Know what is going on with the system right now,” he advises. “Is the system performing as required? Companies should set a workable goal. For example, if compressed air leaks are the issue, review the use of compressed air; are there alternative technologies that can replace the use of air in some areas? Who will perform the leak survey? Above all,” he cautions, “these inspectors should have training, so training should be on the check-off list.”

Consider, too, the cost of a typical leak and how many you project in your plant: 10, 100, 1000? Walk through the system with a diagram or create a map of the system during the walk-through process. Ask what type of equipment will be needed: sophisticated or basic ultrasonic instruments? “The answers,” Bandes explains, “will be determined by the complexity of the system.”

A method of recording and reporting leak survey results, including cost-avoidance figures, should also be created. In addition, there should be a method of follow-up to assure the leaks are repaired properly. “Routes should be created that are manageable. Leak detection does not stop at the survey,” warns Bandes. “It should be routinely incorporated into maintenance planning.”

Educating employees can be a particularly cost-effective way to cut down on leaks. Explain to them the importance of your leak detection program and why they should report leaks when they notice them. Explain that misuse of compressed air can be very costly, and train them in its proper use.

Don’t feel as though you have to reinvent the wheel, either. When it comes to educating personnel on leak detection, you’ll find that there are numerous resources available through manufacturers of machinery, ultrasonic equipment suppliers and consultants. The U.S. Department of Energy also has information available for download on its Web site.

Biggest leak detection mistakes
While leak detection seems a simple enough task, there are pitfalls. “The biggest mistake I see is venturing into a leak detection program without any strategy or written goals,” warns Rienstra. “Without team leaders,” he continues, “without training, without a guideline for how they will present their successes to upper management, any leak detection program is doomed to failure.”

According to Rienstra, as far as techniques go, far too often an inspector does his/her job and leaks are found and tagged, but there is no strategy in place to make sure things get fixed. If the goal is energy savings and greenhouse gas reduction, then the leak has to be fixed to save. “A found leak never saved a penny,” he says.

Bandes of UE adds, “The most common mistakes are lack of planning, lack of communication and insufficient training. Any program, whether it is leak detection or predictive maintenance, requires the support of management.” Don’t just start a program without planning it thoroughly. Bandes suggests that you heed the following checklist:

  • Communicate with management and those who will be part of the program.
  • Explain the program, the methods and the goals.
  • Think through strategies of detection and route creation, reporting and recording results.
  • Have some plan for follow-up on repairs and carefully choose the instruments to be used in relation to the type of system to be inspected.

Remember that without the training of inspection personnel, your whole program can fail. To be successful, personnel need to know the effective methods for locating leaks, as well as how to work with competing ultrasounds in loud environments.

The science and the art
The science of leak detection gets more and more accurate and sophisticated every year. “Manufacturers are always looking for ways to increase the threshold of sensitivity (find smaller and smaller leaks). Probably the most important development aside from that would be software that maps out the inspection process and allows for accountability from the inspector to the repair,” says Rienstra. In other words, more and more automation is on the horizon for leak detection.

And, he adds, all leak detection is basically “dollar driven.” He notes, for example, that energy in California costs close to five times what it costs in other parts of the country. “You think compressed air leaks aren’t issues in that competitive state?”

The art of leak detection, however, is best summed up in the need for training and management emphasis and involvement. Bandes reminds us how vital it is to communicate with management. Leak detection and control have always been important engineering and production issues. These days, though, leakage is also too costly an issue (and an increasingly significant social issue as well) to ignore. Any program to stop leaks is now too important to implement without management involvement, strategy, planning and (one more time) TRAINING.

No leak detection program will ever be perfect, but you can get closer and closer to perfect by concentrating on both the science and the art of it.

George Weimer is a professional writer based in Cleveland, OH.

Continue Reading →


6:00 am
January 1, 2007
Print Friendly

MT News

News of people and events important to the maintenance and reliability community

Rockwell Automation has named Christopher Zei vice president of its OEM initiative. A 21-year marketing, strategic development, operations and management veteran of technology-related companies, Zei was most recently the president/CEO of the North American operations at Schneider Electric’s ELAU group. There he was responsible for building the company’s reputation in North America. In his new role with Rockwell, Zei will be responsible for global sales, strategy and business development for all Rockwell Automation OEM-directed solutions.

QUALCOMM Incorporated, a provider of business-to-business wireless enterprise applications and services, has acquired Chicago-based nPhase LLC, a leading provider of machine-to-machine solutions that allow enterprises to manage and monitor widely dispersed, fixed machine assets. QUALCOMM has been a leader in machine-to-machine technology since 1991, when it introduced the first wide-area, wireless machine-to-machine applications offered with OmniTRACS®, the company’s first mobile communication system for transportation fleet management. nPhase specializes in fixed-asset telemetry, a complementary area of technology. It is nPhase’s fixed-asset machine-to-machine technology, along with its experience and established customer base, that are expected to complement QUALCOMM’s success in the mobile machine-to-machine market and help increase both the breadth and depth of QUALCOMM’s enterprise machine-to-machine solutions portfolio. nPhase, though, will continue to offer products and services under its own brand.

Emerson Process Management has announced that Gordon McFarland, senior power plant performance analyst with the company’s North American Power & Water solutions division, recently received the ISA’s Standards & Practices Award. The award recognizes McFarland for his leadership in the initial development of fossil power plant standards, and for 25 years of continuous support and direction of those standards. During his 37-year career, 26 of them with Emerson, McFarland has made significant contributions to the creation, development and application of technical standards and practices for the power generation industry.

Ferraz Shawmut has announced two personnel appointments. Mark Taylor has joined the company as its new vice president of OEM Sales, and Dean Cousins assumes the newly created position of vice president of Business Development.

Taylor will be responsible for all of Ferraz Shawmut’s OEM sales operations. Educated in the United Kingdom, Taylor holds degrees from West Bromwich College of Commerce & Technology, Aston University and Birmingham University.

Cousins, a Ferraz Shawmut employee for more than 23 years, will assist the company in defining and executing its new external and internal growth strategies. Additionally, he will continue his responsibility for the sales and marketing of Ferraz Shawmut’s Traction, High Power Switch, and overvoltage protection and system products.

The Valve Manufacturers Association (VMA) Board of Directors has approved a new Education & Training program, which will focus on basic valve training for new employees at valve and actuator companies. The first component of the program is expected to launch later this year.

The initiative came out of discussions with VMA members who are concerned that, with many long-time industry professionals on the verge of retirement, much of the basic knowledge about valves and actuators may be lost. In addition, many recently graduated engineering students are not receiving a comprehensive valve education, so VMA intends to share the introductory valve education program with colleges, universities and other educational institutions.

A newly appointed VMA Education & Training Committee, made up of experienced managers from a variety of valve and actuator manufacturers and suppliers, large and small, will guide the development of the program. For more information, visit

In the program brochure for the 2007 MAINTENANCE & RELIABILITY TECHNOLOGY SUMMIT, (MARTS), the terms “Maintenance Technician Effectiveness” and “Overall Maintenance Effectiveness,” as well as their acronyms, MTE and OME, should have been identified with the registered service mark symbol (SM). All of these terms are service marks of LAI Reliability Systems, Inc.

Planned Refinery Unit Turnarounds Will Continue To Decrease In 2007
According to research by Industrial Info Resources (Sugar Land, TX), the number of planned unit maintenance shutdowns for the North American Petroleum Refining Industry will decrease in 2007, marking the second year in a row for declining planned maintenance. This trend is forecast to change in 2008 as refiners schedule maintenance shutdowns to coincide with the first wave of unit additions associated with an industry-wide plan to increase refining capacity.

A number of key issues have combined to reduce the number of planned refinery unit turnarounds over the past two years, including labor shortages, prolonged long-lead equipment delivery times, hurricanes and strong profit margins. After Hurricane Katrina shut down a good portion of U.S. Gulf Coast refining capacity in September 2005, the White House asked U.S. refiners to postpone scheduled maintenance in order to keep production at a high level. That trend continues today. The number of units scheduled for planned maintenance repairs during the second half of 2006 at refineries located in North America is down 8% compared with the same period in 2005. This decrease in activity can be attributed to several events over the past year, starting with Hurricane Katrina.

Labor shortages play a role
Some maintenance projects are being delayed and rescheduled because of a shortage of skilled craftsmen such as iron workers, millwrights, pipefitters and electricians. Companies that provide personnel for construction, as well as equipment service providers, are having difficulty meeting demand. Historically slow petrochemical construction markets over the past decade led to a downsizing of the service industry.

Now, with industrial project activity picking up significantly, not only in the petrochemical sector but across most sectors, such as the Power and Metals & Minerals industries, equipment and service providers are having difficulty keeping up with increased demand. Long-lead delivery times are out as far as two years for some equipment, such as pressurized reactors and vessels, and the labor pool is running thin. Deer Park Refining LP (Deer Park, Texas), for example, rescheduled a $35-million fall 2006 turnaround to January 2007.


Other factors
Another factor that may have contributed to the decrease is that a significant number of shutdowns were scheduled earlier last year so refineries could upgrade process units to produce ultra-low sulfur diesel (ULSD) by the mandated June 2006 deadline. A majority of refiners scheduled ULSD project tie-ins during this timeframe, resulting in some turnarounds originally scheduled for 2007 occurring in 2006. (Refer to the chart above for the breakdown of planned unit turnarounds by market region over the five-year period from 2003-07.)

For 2007, there are currently 257 units planned to be taken offline for maintenance repair and overhaul. That’s a decrease of 21% when compared to the 328 units that went down for repair in 2006.
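As a quick sanity check of the cited totals, the year-over-year decline works out like this:

```python
units_2006 = 328  # units down for repair in 2006 (from the article)
units_2007 = 257  # units planned offline in 2007 (from the article)

pct_decrease = (units_2006 - units_2007) / units_2006 * 100
print(f"{pct_decrease:.1f}% fewer planned turnarounds")  # -> 21.6% fewer planned turnarounds
```

The exact figure is about 21.6%, which the article reports as a 21% decrease.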

Opportunities in unscheduled repair services
In addition to planned maintenance, there are also opportunities to provide services for unscheduled repairs. Since 2003, an average of 65 process units per month have been shut down for unplanned reasons. A majority of those units were shut down due to process glitches, but other reasons include fires, hurricane preparation and economic slowdowns.

Looking beyond 2008, refinery maintenance activity is forecast to increase significantly. In a continent-wide trend to increase refining capacity and improve unit efficiencies, the nation’s refineries are planning a large number of unit additions, expansions and upgrades. Over the past year, Industrial Info has reported 430 projects at U.S. petroleum refineries, with a total investment value of $19.8 billion. Scheduled construction starts for these projects range between November 2005 and April 2012.

About Industrial Info Resources
Industrial Info Resources (IIR) is a marketing information service company that has been doing business for over 23 years. IIR is respected as the leader in providing comprehensive market intelligence pertaining to the industrial processing, heavy manufacturing and energy-related industries throughout the world. For additional information, send inquiries to refininggroup@, or visit the organization online at




6:00 am
January 1, 2007

Thermography At Ford's Dearborn Stamping Plant

While Ford’s Dearborn Stamping Plant (DSP) had thermal cameras on site in the past, those earlier efforts never met the objectives of a successful thermography program. Today, though, the plant’s thermography program is a model for the rest of Ford, and it came on line in just a matter of weeks.


Lately, there has been a stream of visitors to the Dearborn Stamping Plant (DSP) housed in the historic Ford Rouge Center in Dearborn, Michigan. What’s the attraction?

The DSP operation, which manufactures sub-assembly doors and hoods for the popular Ford F-150 pickup truck, achieved a perfect score in a recent independent audit of its weld effectiveness. That makes the plant the best in the company (perhaps the best, period) when it comes to the precision with which it forms and welds sub-assemblies. Executives from Ford Motor Company corporate offices and management from other Ford operations want to know how they do it.

A significant factor is a condition-monitoring program using thermography or thermal imaging. Thermography itself isn’t new to DSP, or new to Ford operations, but the DSP thermography program is unique. After only 30 days, the program scored higher in an insurance audit than any other Ford thermography program had ever scored. It continued to work even when new thermography team members came on board.

In the best tradition of Ford’s commitment to continuous improvement, DSP’s thermography program keeps getting better. And possibly the most distinctive aspect: the program is designed around a systems approach and supported by a systems solution. That means it has the potential to be quickly and easily replicated and the possibility of being deployed with equal success and equivalent return on investment in any Ford operation.

The DSP operation
The Dearborn Stamping Plant occupies much of a two-story building that has supported the manufacture of Ford vehicles since the 1930s. The first floor houses various sub-assembly lines supporting the large number of F-150 styles, where inner and outer door and hood panels are spot-welded together. Before that welding can happen, sheet steel must be formed into panels using four presses located on the second floor of the facility. These include a new, very efficient Schuler Extra Large “A” Transfer Press with five successive slides (presses).

Plant manager Frank Piazza approached process engineer Jim Jackson and asked him to develop a new thermographic process capable of meeting the requirements. The program had to be user-friendly, maintainable, replicable and, most importantly, reliable.

“When a press goes down, eventually door lines shut down,” Jackson explains. “Ford makes a total of about 135 F-150 trucks an hour at its three assembly plants. If we were to have a catastrophic failure, we only have three days before we shut down everything (all three plants).”

Jackson says that before the present thermography program was in place, DSP lost a press for five days. “It was not a pretty picture,” he recalls. “When the lines are down and not producing parts, the company is still incurring costs. Costs mount up fast.” Thermography had been used to assist in determining the root cause of the failure, but the plant’s original thermography program wasn’t doing the job adequately. Since the new thermography program began, there have been no such incidents at DSP. In other words, the new program works.

What’s different at DSP?
The success of the program Jim Jackson put together with the help of John Lafeber, a thermal imaging consultant and a manufacturer’s representative for IR cameras, has several unique elements:

  • Skilled crafts trained in thermography…
    At the heart of DSP’s thermal imaging program is Ford Motor Company’s decision early on to use skilled trades to do thermography. “We chose to go with members of the skilled trades (electricians, weld fixture repair specialists, etc.),” says Jackson. “We realized that once these people were trained and certified in using an infrared camera, they would know what a thermal image meant and how it would impact the process.”
  • Autonomous maintenance…
    Jackson also believed that to be successful the thermography program needed to empower its thermographers to make decisions without the interference or second-guessing of anyone, including management. “I knew that they (the tradespeople) are the experts about how the equipment and processes function,” he says. “So, we did what everybody talks about. We empowered people, the experts, to make decisions about what needs to be fixed and when to do it.” The thermographers do the inspections, write the work orders, publish the reports and complete the follow-up to ensure that the concern is addressed and management is informed of the status of repairs and all associated issues.
  • A “lean” operation… A product of Ford’s overall commitment to continuous improvement and Jackson’s embracing of the thinking of James Womack [1], DSP’s thermography program is what Jackson calls a lean operation. This is one in which something is “done only once.” As a model, Jackson cites the way many modern retail operations do inventory: bar codes, databases and hand-held specialty computers with scanners, one step and no paper.

Immediate and ongoing successes
Everything came together for the new thermography program on a Monday morning two years ago. Rick Cox, an electrician, and Hassan Koussan, a specialist in weld-fixture repair, were equipped with IR cameras and Pocket PCs. That same day, Jackson received word that plant manager Piazza wanted to see “what he had paid all this money for” at a Wednesday morning meeting. This was just two days into Cox’s and Koussan’s training on using the new systems solution for the DSP thermography program. With the support of Jackson, their maintenance mentor (or in lean terms, their “sensei”), Cox and Koussan went to the Wednesday meeting and reported on their results. Piazza liked what he saw and heard.

Cox’s initial thermographic responsibilities were to be on the press floor. Koussan, the weld-fixture expert, was assigned the welding lines in the assembly area. At the end of the first month of the program, there was an insurance audit. DSP’s thermography program scored higher than any other Ford thermography program had ever scored, and that was only the beginning. Over the first two years of the program, there have been many audits: insurance, ISO and third-party weld audits. The results have always been the same: the best thermography program the auditors have ever seen.

The weld audits are semi-annual events at DSP. An independent auditor performs them in order to verify the integrity of the welds performed on door and hood sub-assemblies. In part, these audits are intended to confirm that DSP is meeting Federal Motor Vehicle Safety Standards for welding, including confirmation that safety-critical welds, called delta welds, are sound.

In March 2006, an independent audit performed on welds at DSP concluded: “The Dearborn Stamping Plant weld quality percentage is outstanding. The overall weld effectiveness is 100 percent. The group effectiveness is 100 percent.”

William “Bill” Bushey, weld engineer at DSP, agrees that the audit exceeded expectations: “We went from 98% weld quality, which was the best that we could hope for, to 100% weld quality. We were perfect in the audit, and now we are challenged to maintain that level of performance.”

Bruce Dudley, DSP’s manager of engineering, puts the accomplishment into perspective. He points out that if one did an SPC (statistical process control) analysis based on the number of doors produced at DSP, even by the very best world-class standards for quality, a small percentage of defects could be expected. Still, the audit at DSP found 100% weld effectiveness.

In other words, the welding operations at DSP got a perfect score, which means that F-150 doors and hoods are safe and of superior quality. Managers interviewed for this article attribute these audit results to the thermography program.

How DSP does thermography
In addition to thermal cameras, the DSP thermal imaging system also has an IR reporting database program (Lean DB from Thermal Trend) that lists on desktop PCs each piece of equipment that thermographers visit on inspection routes. The equipment is listed in the order it is thermally scanned. In addition, the same routes are loaded into each thermographer’s Pocket PC, which he carries with him during inspections, constantly updating the database for downloading into his desktop computer back in his office. When practical, each piece of equipment on an inspection route is bar coded, and the bar code is scanned into the Pocket PC at each inspection. This move ensures that the data collected is assigned to the correct asset (piece of equipment) and that no asset scheduled for inspection is missed. The equipment bar codes for the thermography program are linked to the plant’s maintenance management system to aid in tracking the work order process/concerns for each piece of equipment/system that is inspected.
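The route bookkeeping just described (ordered stops, bar-code confirmation, missed-asset checks) could be sketched roughly as follows. The class names, asset IDs and schema here are illustrative assumptions, not the actual Lean DB format:

```python
from dataclasses import dataclass

@dataclass
class RouteStop:
    asset_id: str          # matches the bar code fixed to the equipment
    description: str
    scanned: bool = False  # flipped when the bar code is read during inspection

@dataclass
class InspectionRoute:
    name: str
    stops: list

    def scan(self, barcode: str) -> RouteStop:
        """Confirm the scanned code belongs to this route and mark the asset visited."""
        for stop in self.stops:
            if stop.asset_id == barcode:
                stop.scanned = True
                return stop
        raise KeyError(f"Bar code {barcode!r} is not on route {self.name!r}")

    def missed(self) -> list:
        """Assets scheduled on the route that have not yet been scanned."""
        return [s.asset_id for s in self.stops if not s.scanned]

route = InspectionRoute("press-floor", [
    RouteStop("PNL-001", "Press 1 electrical panel"),
    RouteStop("MTR-014", "Press 1 main drive motor"),
])
route.scan("PNL-001")
print(route.missed())  # -> ['MTR-014']
```

The scan-to-confirm step is what guarantees data lands on the correct asset, and the missed-asset check is what guarantees nothing on the route is skipped.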

Some experts speculate that thermographers typically spend more than 25% of their time doing reports. In the DSP system, reports are essentially complete when a thermographer enters data into his Pocket PC on the plant floor. The data is entered into screens formatted just the way a thermographer needs them to be organized. In fact, the software actually prompts the thermographer to enter the data needed (temperatures, loads, etc.) at each inspection site. The data is downloaded into a relational database, where it can be used for any purpose (e.g., reports) by anyone with access to the network.

The process just described supports the philosophy of a lean operation. Data gathered on the plant floor goes directly into the system without further manipulation. Therefore, documentation is always 100% up-to-date and paperless. Plans are already in the works to make the system wireless, eliminating the “manual” transfer of data back in the office.

Quick training, easy transitions
Because DSP’s thermographers are skilled tradespeople first and thermographers second, their training took only 30 intense days. The fact that the relational database they use is self-guiding and intuitive helped to speed up the process.

Since the initial launch of this thermography program, Cox has moved on to be the plant configuration administrator responsible for keeping track of all the equipment in the plant. His replacement is Chuck Larabell. Like Cox, Larabell is an electrician, and he, too, went from electrician to productive thermographer in about 30 days. Larabell was able to assume Cox’s routes and database and learn the thermography requirements, and has since made his own route additions to the original. Koussan is now the sensei/mentor for a skilled-trades person from another Ford plant implementing a lean thermography program, and all concerned expect that transition to take about a month, too.

Jackson speculates that any operation willing to invest in thermal imagers, Pocket PCs, the required software and intense 30-day training for industrious, skilled craftspeople can create a successful, lean thermography program if it is willing to support and empower the thermographers to do the job. “It takes a systems approach,” he asserts. “We bought and implemented a system solution.”

Day-to-day thermography at DSP
The two main areas of the plant where thermography is performed are the press floor and the assembly floor. Respectively, Larabell and Koussan do thermography there, Monday through Friday. While the thermographers themselves may immediately fix a problem they discover, more often than not, the repairs are done on a third shift set aside for maintenance, or on weekends when there usually is limited or no production.

When the thermal camera reveals a problem on an inspection route, Larabell and Koussan save an image so they can include it in an e-mail report when they return to their office. The report, sent to a list of recipients that includes everyone from plant manager Piazza to operators on the plant floor, also incorporates a work order number to ensure that there is correlation between the report and any necessary repairs done as a result. After the repairs are made, the thermographers receive a report to that effect, which alerts them to go back and verify that the repair was done effectively.

At DSP, reports are tools. They are sent on a daily basis to the teams responsible for repairing equipment. A simple report is also printed weekly for management. It shows problems that have been found and problems that have been resolved. Other special reports are printed when needed. To gain a better understanding of DSP’s thermography program, look briefly at how Larabell and Koussan do their work:

On the press floor… Larabell, who does thermography full time throughout the plant, monitors the four presses on two-week intervals. He concentrates on the electrical panels, each of which has an identifying bar code on the outside and scores of electrical contacts and components inside. He also scans motors, valves and other components, looking for problems and potential problems.

The database in Larabell’s Pocket PC includes the normal running conditions for the equipment scanned. As a result, he can compare current operational values to “what ought to be.” Furthermore, since every panel and every piece of equipment (where practical) has a bar code, it gets checked off in the database following scanning.
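Comparing a live reading against the stored “what ought to be” values amounts to a simple delta check against a baseline. A minimal sketch, where the asset names, baseline temperatures and 10-degree alarm margin are all hypothetical:

```python
# Hypothetical baselines: normal-running temperatures (deg C) recorded per asset.
BASELINES = {"PNL-001/contact-B7": 42.0, "MTR-014/bearing": 55.0}

def out_of_spec(asset: str, measured_c: float, margin_c: float = 10.0) -> bool:
    """Flag a reading that exceeds the stored baseline by more than the margin."""
    return measured_c - BASELINES[asset] > margin_c

print(out_of_spec("PNL-001/contact-B7", 68.0))  # 26 deg over baseline -> True
print(out_of_spec("MTR-014/bearing", 58.0))     # within margin -> False
```

In practice a thermographer would also weigh load and ambient conditions, but the core comparison is exactly this: measured value versus the recorded normal running condition.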

On the assembly floor… Koussan monitors approximately 500 welding guns in the assembly area plus the related electrical panels and other equipment. These responsibilities translate into 1,500 pieces of equipment in his PC’s database. “Weld guns get checked at least once a month,” he says. “For other equipment, we know the history of incidents on each piece, and we set our inspection frequencies based on that.”
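The history-based frequency rule Koussan describes could be sketched as a simple interval lookup. The thresholds and the 14/30/90-day intervals below are illustrative assumptions, not DSP’s actual rules:

```python
from datetime import date, timedelta

def next_inspection(last_done: date, incidents_last_year: int) -> date:
    """Illustrative rule: the more incident history an asset has,
    the shorter its inspection interval."""
    if incidents_last_year >= 3:
        interval = timedelta(days=14)   # troubled equipment: every two weeks
    elif incidents_last_year >= 1:
        interval = timedelta(days=30)   # some history: monthly, like the weld guns
    else:
        interval = timedelta(days=90)   # quiet equipment: quarterly
    return last_done + interval

print(next_inspection(date(2007, 1, 1), 0))  # -> 2007-04-01
```

The point of the sketch is that the schedule is data-driven: inspection frequency follows the incident record rather than a fixed calendar applied uniformly to everything.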

Koussan typically comes to work four hours before the end of the first (midnight) shift. This allows him to monitor the weld guns when they are not in production, using instruments other than a thermal imager to collect and trend data that may help him detect potential problems.

Once production starts on the first shift, Koussan begins using thermography on the assembly processes. He “shoots” weld guns, transformers, shunts and their cabling, weld control panels and electrical panels.

Since he comes in at the end of the maintenance shift, Koussan is in a position to ask if problems from the previous day have been fixed. An affirmative answer sends him to the location with his IR camera to confirm that repairs have been made successfully. “Ninety-nine percent of the time a problem is fixed that night,” he says. “If it’s not, then we have to follow up on it the next day.”

Looking ahead
Everyone at DSP interviewed for this article expressed a commitment to continuous improvement in all aspects of DSP’s operations, including the thermography program. One tool for achieving continuous improvement in thermography is a bi-monthly meeting of the thermography team, which includes Lafeber.

“We hold these meetings to help us understand the program and how to improve it,” Lafeber acknowledges. He further explains that suggestions for improvements often result from thinking about thermography as a lean process. Going wireless, for example, will eliminate a step in the reporting process.

One significant initiative that has come out of “thinking lean” is a proposal to rewrite the database software to support a continuous flow model rather than the traditional batch and queue way of operating. “In the past, thermography has been done on a batch and queue basis: a batch of inspections, then a batch of reports, then a batch of repairs,” Lafeber notes. “What is more efficient is to do continuous flow thermography. The original database used by DSP was designed for batch and queue. DSP has written specifications for new software to take better advantage of continuous flow thermography. It will reduce the time from problem detection to repair and make better use of the data in the database.”
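The batch-and-queue versus continuous-flow distinction can be illustrated in miniature: collect all findings and issue work orders in one batch at the end, versus dispatching each work order the moment its finding is recorded. The finding descriptions and work-order format here are made up for illustration:

```python
# Batch and queue: a full inspection pass first, then one batch of work orders.
def batch_and_queue(findings):
    queue = list(findings)  # everything waits until the pass is complete
    return [f"WO-{i}: {f}" for i, f in enumerate(queue, 1)]

# Continuous flow: each finding becomes a work order the moment it is recorded.
def continuous_flow(findings):
    for i, f in enumerate(findings, 1):
        yield f"WO-{i}: {f}"  # dispatched immediately; nothing waits on the batch

found = ["hot contact, panel PNL-001", "warm shunt, weld gun 212"]
print(batch_and_queue(found))
for work_order in continuous_flow(found):
    print(work_order)
```

Both produce the same work orders; the difference is when each one becomes actionable, which is exactly the detection-to-repair time Lafeber wants to reduce.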

Wayne Little, facilities supervisor at DSP, is quick to point out that compared to other Ford stamping plants, DSP’s yields are higher, even though the plant runs fewer presses for fewer hours. In fact, the yields are up–even on the presses that were there before DSP installed the relatively new five-slide Schuler. MT

John Pratten III is an ASNT-compliant, Level II trained thermographer with Fluke. He conducts training for customers, including a number of Fortune 500 companies, in the fundamentals of IR and how to set up quality PdM programs. He also performs thermography work and training in building science, including working closely with and training various state agencies involved with projects to improve the quality of low-income housing. E-mail:


1. James P. Womack is the founder and chairman of the Lean Enterprise Institute, a nonprofit educational and research organization chartered in 1997 to advance a set of ideas known as lean production and lean thinking.
