

8:09 pm
April 29, 2009

Going Wireless: Wireless Technology Is Ready For Industrial Use

Wireless works in a plant, but you’ll want to be careful regarding which “flavor” you choose

Wireless technology now provides secure, reliable communication for remote field sites and for applications where wires cannot be run for practical or economic reasons. For maintenance purposes, wireless can be used to acquire condition-monitoring data from pumps and machines, effluent data from remote monitoring stations, or process data from an I/O system.

For example, a wireless system monitors a weather station and the flow of effluent leaving a chemical plant. The plant’s weather station is 1.5 miles from the main control room. It has a data logger that reads inputs from an anemometer to measure wind speed and direction, a temperature gauge and a humidity gauge. The data logger connects to a wireless remote radio frequency (RF) transmitter module, which broadcasts a 900MHz, frequency hopping spread spectrum (FHSS) signal via a Yagi directional antenna installed at the top of a tall boom beside the weather station building. This posed no problem.

However, the effluent monitoring station was thought to be impossible to connect via wireless. Although the distance from this monitoring station to the control room is only one-quarter mile, the RF signal had to pass through a four-story boiler building. Nevertheless, the application was tested before installation, and it worked perfectly. The lesson here is that wireless works in places where you might think it can’t. All you have to do is test it.

There are many flavors of wireless, and an understanding of them is needed to determine the best solution for any particular application. Wireless can be licensed or unlicensed, Ethernet or serial interface, narrowband or spread spectrum, secure or open protocol, Wi-Fi…the list goes on. This article provides an introduction to this powerful technology.

The radio spectrum
The range of approximately 9 kilohertz (kHz) to 300 gigahertz (GHz) can be used to broadcast wireless communications. Frequencies higher than these belong to the infrared spectrum, the visible light spectrum, X-rays, etc. Since the RF spectrum is a limited resource used by television, radio, cellular telephones and other wireless devices, the spectrum is allocated by government agencies that regulate which portion of the spectrum may be used for specific types of communication or broadcast.

In the United States, the Federal Communications Commission (FCC) governs the allocation of frequencies to non-government users. The FCC has limited Industrial, Scientific and Medical (ISM) equipment to operation in the 902-928MHz, 2400-2483.5MHz and 5725-5875MHz bands, with limitations on signal strength, power and other radio transmission parameters. These bands are known as unlicensed bands and can be used freely within FCC guidelines. Other bands in the spectrum can be used with the grant of a license from the FCC. (Editor’s Note: For a quick definition of the various bands in the RF spectrum, as well as their uses, log on to: http://encyclopedia.thefreedictionary.com/radio+frequency )

Licensed or unlicensed
A license granted by the FCC is needed to operate on a licensed frequency. Ideally, these frequencies are interference-free, and legal recourse is available if there is interference. The drawbacks are a complicated and lengthy licensing procedure, the inability to purchase off-the-shelf radios (they must be manufactured for the licensed frequency) and, of course, the costs of obtaining and maintaining the license.


License-free operation uses one of the frequency bands the FCC has set aside for open use, with no registration or authorization required. Based on where the system will be located, there are limitations on the maximum transmission power. For example, in the U.S., in the 900MHz band, the maximum power may be 1 Watt at the transmitter, or 4 Watts EIRP (Effective Isotropic Radiated Power).
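The relationship between transmitter power, antenna gain and EIRP is simple decibel arithmetic. A minimal sketch of that arithmetic (the 6 dBi antenna gain, function names and values here are illustrative, not taken from any particular product or rule text):

```python
from math import log10

def watts_to_dbm(watts):
    """Convert power in watts to dBm (decibels relative to 1 milliwatt)."""
    return 10 * log10(watts * 1000)

def eirp_dbm(tx_power_dbm, antenna_gain_dbi, cable_loss_db=0.0):
    """EIRP in dBm: transmitter power plus antenna gain minus feedline loss."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

# 1 W at the transmitter is 30 dBm; adding a 6 dBi antenna with no cable
# loss yields 36 dBm, which is roughly the 4 W EIRP figure cited above.
print(round(watts_to_dbm(1.0)))           # 30
print(eirp_dbm(watts_to_dbm(1.0), 6.0))   # 36.0
```

Working in dBm and dBi turns the multiplications of linear power into additions, which is why radio link budgets are almost always quoted this way.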

The advantages of using unlicensed frequencies are clear: no cost, time or hassle in obtaining licenses; many manufacturers and suppliers who serve this market; and lower startup costs, because a license is not needed. The drawback lies in the idea that since these are unlicensed bands, they can be “crowded” and, therefore, may lead to interference and loss of transmission. That‘s where spread spectrum comes in. Spread spectrum radios deal with interference very effectively and perform well, even in the presence of RF noise.

Spread spectrum systems
Spread spectrum is a method of spreading the RF signal across a wide band of frequencies at low power, versus concentrating the power in a single frequency as is done in narrowband transmission. Narrowband refers to a signal that occupies only a small section of the RF spectrum, whereas a wideband or broadband signal occupies a larger section of it. The two most common forms of spread spectrum radio are frequency hopping spread spectrum (FHSS) and direct sequence spread spectrum (DSSS). Most unlicensed radios on the market are spread spectrum.

As the name implies, frequency hopping changes the frequency of the transmission at regular intervals of time. The advantage is obvious: since the transmitter changes its broadcast frequency so often, only a receiver programmed with the same algorithm is able to listen and follow the message. The receiver must be set to the same pseudo-random hopping pattern, and listen for the sender’s message at precisely the correct time on the correct frequency. Fig. 1 shows how the frequency of the signal changes with time. Each frequency hop is equal in power and dwell time (the length of time spent on one channel). Fig. 2 shows a two-dimensional representation of frequency hopping, in which the frequency of the radio changes for each period of time. The hop pattern is based on a pseudo-random sequence.
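The shared-algorithm idea can be sketched in a few lines: both radios derive the same pseudo-random channel sequence from a shared seed, while a radio with the wrong seed sees an unrelated sequence. The channel count, hop count and seed values below are illustrative only, not taken from any real radio:

```python
import random

def hop_pattern(shared_seed, n_channels=50, n_hops=12):
    """Derive a pseudo-random channel hop pattern from a shared seed.
    Radios seeded identically step through identical channel sequences."""
    rng = random.Random(shared_seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

# Transmitter and receiver configured with the same seed hop together.
transmitter = hop_pattern(shared_seed=42)
receiver = hop_pattern(shared_seed=42)
print(transmitter == receiver)   # True: same seed, same sequence

# A radio with a different seed follows an unrelated sequence.
outsider = hop_pattern(shared_seed=7)
print(outsider == transmitter)   # False: wrong seed, wrong sequence
```

Real radios add timing synchronization and FCC-mandated channel-separation rules on top of this, but the shared pseudo-random sequence is the core of the scheme.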


DSSS combines the data signal with a higher data-rate bit sequence, also known as a “chipping code,” thereby “spreading” the signal over a greater bandwidth. In other words, the signal is multiplied by a noise-like signal generated from a pseudo-random sequence of 1 and -1 values. The receiver then multiplies the received signal by the same sequence to arrive at the original message (since 1 x 1 = 1 and -1 x -1 = 1).
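That multiply-then-multiply-again cycle can be sketched directly. This is a simplified illustration of the principle only; the 11-chip code shown is illustrative, not the code used by any particular radio:

```python
def spread(data_bits, chip_code):
    """Spread: multiply each +1/-1 data bit by every chip in the code."""
    return [bit * chip for bit in data_bits for chip in chip_code]

def despread(chips, chip_code):
    """Despread: correlate against the same code; the sign recovers each bit."""
    n = len(chip_code)
    recovered = []
    for i in range(0, len(chips), n):
        # chip * chip is always +1, so the sum is +n for a +1 bit, -n for -1.
        correlation = sum(c * k for c, k in zip(chips[i:i + n], chip_code))
        recovered.append(1 if correlation > 0 else -1)
    return recovered

chip_code = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]  # illustrative 11-chip code
message = [1, -1, -1, 1]
print(despread(spread(message, chip_code), chip_code) == message)  # True
```

Each data bit becomes eleven chips on the air, which is exactly the bandwidth expansion the article describes: the chip rate, not the data rate, sets the occupied spectrum.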

When the signal is “spread,” the transmission power of the original narrowband signal is distributed over the wider bandwidth, thereby decreasing the power at any one particular frequency (also referred to as low power density). Fig. 3 shows the signal over a narrow part of the RF spectrum. In Fig. 4, that signal has been spread over a larger part of the spectrum, keeping the overall energy the same, but decreasing the energy per frequency. Since spreading the signal reduces the power in any one part of the spectrum, the signal can appear as noise. The receiver must recognize this signal and demodulate it to arrive at the original signal without the added chipping code. FHSS and DSSS both have their place in industry and can both be the “better” technology based on the application. Rather than debating which is better, it is more important to understand the differences, and then select the best fit for the application. In general, a decision involves:

  • Throughput
  • Collocation
  • Interference
  • Distance
  • Security

Throughput is the average amount of data communicated in the system every second. This is probably the first decision factor in most cases. DSSS has a much higher throughput than FHSS because it uses its bandwidth more efficiently and employs a much larger section of the spectrum for each transmission. In most industrial remote I/O applications, the throughput of FHSS is not a problem.

As the size of the network changes or the data rate increases, this may become a greater consideration. Most FHSS radios offer a throughput of 50-115 kbps for Ethernet radios. Most DSSS radios offer a throughput of 1-10 Mbps. Although DSSS radios have a higher throughput than FHSS radios, one would be hard-pressed to find any DSSS radios that serve the security and distance needs of the industrial process control and SCADA market. Unlike FHSS radios, which operate over 26MHz of the spectrum in the 900MHz band (902-928MHz), and DSSS radios, which operate over 22MHz of the 2.4GHz band, licensed narrowband radios are limited to 12.5kHz of the spectrum. Naturally, as the width of the spectrum is limited, the bandwidth and throughput will be limited as well. Most licensed-frequency narrowband radios offer a throughput of 6400 to 19,200 bps.
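The practical impact of those throughput figures is easy to estimate. A rough sketch, ignoring protocol overhead and retries, using a hypothetical 10 kB block of process data and representative rates from the ranges above:

```python
def transfer_seconds(payload_bytes, throughput_bps):
    """Idealized transfer time at a sustained throughput (no overhead)."""
    return payload_bytes * 8 / throughput_bps

payload = 10_000  # hypothetical 10 kB block of process data
for name, bps in [("licensed narrowband", 9_600),
                  ("FHSS", 115_000),
                  ("DSSS", 1_000_000)]:
    print(f"{name:>19}: {transfer_seconds(payload, bps):6.2f} s")
```

At 9600 bps the block takes over eight seconds; at 1 Mbps it takes under a tenth of a second. For small, periodic remote I/O updates even the narrowband figure is adequate, which is why throughput is rarely the deciding factor in those applications.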

Collocation refers to having multiple independent RF systems located in the same vicinity. DSSS does not allow a high number of radio networks to operate in close proximity, because each network spreads its signal across the same range of frequencies. For example, within the 2.4GHz ISM band, DSSS allows only three collocated channels. Each DSSS transmission is spread over 22MHz of the spectrum, which allows only three sets of radios to operate without overlapping frequencies.

FHSS, on the other hand, allows multiple networks to use the same band because of different hopping patterns. Hopping patterns that use different frequencies at different times over the same bandwidth are called orthogonal patterns. FHSS uses orthogonal hopping routines to run multiple radio networks in the same vicinity without their interfering with each other. That is a huge plus when designing large networks that need to keep one communication network separate from another. Many lab studies show that up to 15 FHSS networks may be collocated, whereas only three DSSS networks may be collocated. Narrowband radios obviously cannot be collocated, as they operate on the same 12.5kHz of the spectrum.

Interference is RF noise in the vicinity and in the same part of the RF spectrum. The combining of the two signals can generate a new RF wave, or can cause losses or cancellation in the intended signal. Spread spectrum in general is known to tolerate interference very well, although the two flavors handle it differently. When a DSSS receiver finds narrowband interference, it multiplies the received signal by the chipping code to retrieve the original message. This causes the original signal to appear as a strong narrowband signal, while the interference gets spread into a low-power wideband signal that appears as noise and thus can be ignored.

In essence, the very spreading that puts a DSSS signal below the noise floor is what allows DSSS radios to ignore narrowband interference when demodulating a signal. Therefore, DSSS is known to tolerate interference very well, but it is prone to fail when the interference is at a higher total transmission power and the demodulation effect does not drop the interfering signal below the power level of the original signal.

Given that FHSS operates over 83.5MHz of the spectrum in the 2.4GHz band, producing high-power signals at particular frequencies (equivalent to many short, synchronized bursts of narrowband signal), it will avoid interference as long as it is not on the same frequency as a narrowband interferer. Narrowband interference will, at most, block a few hops, which the system can compensate for by moving the message to a different frequency. Also, FCC rules require a minimum separation of frequency in consecutive hops, so the chance of a narrowband signal interfering in consecutive hops is minimized.

When it comes to wideband interference, DSSS is not so robust. Since DSSS spreads its signal over 22MHz of the spectrum all at once at much lower power, if that 22MHz of the spectrum is blocked by noise or a higher-power signal, it can block 100% of the DSSS transmission, whereas it will block only about 25% of the FHSS transmission. In this scenario, FHSS will lose some efficiency but will not suffer a total loss.
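The 25% figure follows from the bandwidth numbers already given: a 22MHz-wide interferer covers roughly a quarter of the 83.5MHz across which FHSS hops. A quick check of the arithmetic:

```python
# Back-of-the-envelope check of the roughly-one-quarter figure.
fhss_band_mhz = 83.5   # FHSS hops across the full 2400-2483.5MHz ISM band
jammer_mhz = 22.0      # width of the wideband interferer (one DSSS channel)
fraction_blocked = jammer_mhz / fhss_band_mhz
print(f"{fraction_blocked:.0%}")  # 26%
```

Only the hops that land inside the jammed 22MHz are lost; the remaining three-quarters of the hop set still gets through, which is why the FHSS link degrades rather than fails.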

In licensed radios the bandwidth is narrow, so a slight interference in the range can completely jam transmission. In this case, highly directional antennas and band-pass filters may be used to allow for uninterrupted communication, or legal action may be pursued against the interferer.

802.11 radios are more prone to interference since there are so many readily available devices in this band. Ever notice how your microwave interferes with your cordless phone at home? They both operate in the 2.4GHz range, the same range used by most 802.11 devices. Security also becomes a greater concern with these radios.

When the intended receiver is located closer to other transmitters and farther from its own partner, the situation is known as a near/far problem. The nearby transmitters can potentially drown the receiver in high-power foreign signals. Most DSSS systems would fail completely in this scenario. The same scenario in a FHSS system would cause some hops to be blocked, but the integrity of the system would be maintained. In a licensed radio system, the outcome would depend on the frequency of the foreign signals. If they were on the same or a close frequency, they would drown out the intended signal, but there would be recourse for action against the offender, unless the offender holds a license as well.

Distance is closely related to link connectivity, or the strength of the RF link between a transmitter and a receiver and the distance at which they can maintain a robust link. Given the same power level and the same modulation technique, a 900MHz radio will have higher link connectivity than a 2.4GHz radio. As the frequency increases, the transmission distance decreases if all other factors remain the same. The ability to penetrate walls and objects also decreases as the frequency increases. Higher frequencies in the spectrum tend to display reflective properties. For example, a 2.4GHz RF wave can bounce off the reflective walls of buildings and tunnels. Based on the application, this can be used as an advantage to take the signal farther, or it may be a disadvantage causing multipath, or no path, because the signal is bouncing back.
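The frequency-versus-distance trade-off can be quantified with the standard free-space path-loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. A sketch comparing the two unlicensed bands over the same hypothetical one-mile line-of-sight link (the band-center frequencies chosen here are illustrative):

```python
from math import log10

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss in dB for distance in km and frequency in MHz."""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

# The same 1-mile (1.61 km) line-of-sight link at both unlicensed bands:
loss_900 = free_space_path_loss_db(1.61, 915)    # ~95.8 dB
loss_2400 = free_space_path_loss_db(1.61, 2450)  # ~104.4 dB
print(round(loss_2400 - loss_900, 1))            # 8.6 dB extra loss at 2.4 GHz
```

Because both frequency and distance enter the formula logarithmically, moving from 915MHz to 2450MHz costs about 8.6 dB at any distance, which is the quantitative reason a 900MHz radio holds a link farther than a 2.4GHz radio at the same power.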

The FCC limits the output power of spread spectrum radios. DSSS consistently transmits at low power, as discussed above, and stays within the FCC regulation by doing so. This limits the transmission distance of DSSS radios, which may be a limitation for many industrial applications. FHSS radios, on the other hand, transmit at high power on particular frequencies within the hopping sequence, but the average power across the spectrum is low and therefore meets the regulations. Since the actual signal is transmitted at a much higher power than with DSSS, it can travel farther. Most FHSS radios are capable of transmitting over 15 miles, and longer distances with higher-gain antennas.

802.11 radios, although available in both DSSS and FHSS versions, have a high bandwidth and data rate, up to 54Mbps (at the time of this publication). It is important to note, however, that this throughput applies only at very short distances and degrades very quickly as the distance between the radio modems increases. For example, a distance of 300 feet can drop the 54Mbps rate down to 2Mbps. This makes these radios ideal for a small office or home application, but not for many industrial applications where there is a need to transmit data over several miles.

Since narrowband radios tend to operate at lower frequencies, they are a good choice in applications where FHSS radios cannot provide adequate distance. A proper application for licensed narrowband radios is when there is a need to use a lower frequency either to travel a greater distance, or to follow the curvature of the earth more closely and provide link connectivity in areas where line of sight is hard to achieve.

Since DSSS signals run at such low power, the signals are difficult to detect by intruders. One strong feature of DSSS is its ability to decrease the energy in the signal by spreading the energy of the original narrowband signal over a larger bandwidth, thereby decreasing the power spectral density. In essence, this can bring the signal level below the noise floor, thereby making the signal “invisible” to would-be intruders. On the same note, however, if the chipping code is known or is very short, then it is much easier to detect the DSSS transmission and retrieve the signal since it has a limited number of carrier frequencies. Many DSSS systems offer encryption as a security feature, although this increases the cost of the system and lowers the performance, because of the processing power and transmission overhead for encoding the message.

For an intruder to successfully tune into a FHSS system, he needs to know the frequencies used, the hopping sequence, the dwell time and any included encryption. Given that for the 2.4GHz band the maximum dwell time is 400ms over 75 channels, it is almost impossible to detect and follow a FHSS signal if the receiver is not configured with the same hopping sequence, etc. In addition, most FHSS systems today come with high security features such as dynamic key encryption and CRC error bit checking.

Today, Wireless Local Area Networks (WLANs) are becoming increasingly popular. Many of these networks use the 802.11 standard, an open protocol developed by the IEEE. Wi-Fi is a standard logo used by the Wireless Ethernet Compatibility Alliance (WECA) to certify 802.11 products. Although industrial FHSS radios tend not to be Wi-Fi, and therefore are not compatible with these WLANs, there is a good chance of interference because they operate in the same bandwidth. Since most Wi-Fi products operate in the 2.4 or 5GHz bands, it may be a good idea to stick with a 900MHz radio in industrial applications, if the governing body allows this range (Europe allows only 2.4GHz, not 900MHz). This will also provide an added security measure against RF sniffers (a tool used by hackers) in the more popular 2.4GHz band.

Security is one of the top issues discussed in the wireless technology sector. Recent articles about “drive-by hackers” have left present and potential consumers of wireless technology wary of possible infiltrations. Consumers must understand that 802.11 standards are open standards and can be easier to hack than many of the industrial proprietary radio systems.

The confusion about security stems from a lack of understanding of the different types of wireless technology. Today, Wi-Fi (802.11a, b and g) seems to be the technology of choice for many applications in the IT world, homes and small offices. 802.11 is an open standard to which many vendors, customers and hackers have access. While many of these systems can use encryption such as AES and WEP, many users forget or neglect to enable these safeguards, which would make their systems more secure. Moreover, features like MAC filtering can also be used to prevent unauthorized access by intruders on the network. Nonetheless, many industrial end users are very wary about sending industrial control information over standards that are totally “open.”

So, how do users of wireless technology protect themselves from infiltrators? One almost certain way is to use non-802.11 devices that employ proprietary protocols that protect networks from intruders. Frequency hopping spread spectrum radios have an inherent security feature built into them. First, only the radios on the network that are programmed with the “hop pattern” algorithm can see the data. Second, the proprietary, non-standard encryption method of the closed radio system will further prevent any intruder from deciphering that data.

The idea that a licensed frequency network is more secure may be misleading. As long as the frequency is known, anyone can dial into the frequency, and as long as they can hack into the password and encryption, they are in. The added security benefits that were available in spread spectrum are gone since licensed frequencies operate in narrowband. Frequency hopping spread spectrum is by far the safest, most secure form of wireless technology available today.

Mesh radio networks
Mesh radio is based on the concept of every radio in a network having peer-to-peer capability. Mesh networking is becoming popular because its communication path can be quite dynamic. Like the World Wide Web, mesh nodes make and monitor multiple paths to the same destination to ensure that there is always a backup communication path for the data packets.

There are many concerns that developers of mesh technology are still trying to address, such as latency and throughput. The concept of mesh is not new. The Internet and the telephone network are excellent examples of mesh networks in a wired world. Each node can initiate communication with another node and exchange information.

In conclusion, the choice of radio technology should be based on the needs of the application. For most industrial process control applications, proprietary-protocol, license-free frequency hopping spread spectrum radios (Fig. 5) are the best choice because of their lower cost and higher security capabilities in comparison with licensed radios. When distances are too great for a strong link between FHSS radios even with repeaters, licensed narrowband radios should be considered for better link connectivity. The cost of licensing may offset the cost of installing extra repeaters in a FHSS system.

As more industrial applications require greater throughput, networks employing DSSS that enable TCP/IP and other open Ethernet packets to pass at higher data rates will be implemented. This is a very good solution where PLCs (Programmable Logic Controllers), DCSs (Distributed Control Systems) and PCSs (Process Control Systems) need to share large amounts of data with one another or with upper-level systems like MES (Manufacturing Execution Systems) and ERP (Enterprise Resource Planning) systems.

When considering a wireless installation, check with a company offering site surveys that allow you to install radios at remote locations to test connectivity and throughput capability. Often this is the only way to ensure that the proposed network architecture will satisfy your application requirements. These demo radios also let you look at the noise floor of the plant area, signal strength, packet success rate, and whether any segments of the license-free bandwidth are currently too crowded for effective communication throughput. If this is the case, then hop patterns can be programmed that jump around that noisy area instead of through it. MT

Gary Mathur is an applications engineer with Moore Industries-International, in North Hills, CA. He holds Bachelor’s and Master’s degrees in Electronics Engineering from Agra University, and worked for 12 years with Emerson Process Management before joining Moore. For more information on the products referenced in this article, telephone: (818) 894-7111; e-mail:



6:00 am
January 1, 2007

Thermography At Ford's Dearborn Stamping Plant

While Ford’s Dearborn Stamping Plant (DSP) had thermal cameras on site in the past, the plant had not met the objectives of a successful thermography program. Today, though, the plant’s thermography program is a model for the rest of Ford, and it came on line in just a matter of weeks.


Lately, there has been a stream of visitors to the Dearborn Stamping Plant (DSP) housed in the historic Ford Rouge Center in Dearborn, Michigan. What’s the attraction?

The DSP operation, which manufactures sub-assembly doors and hoods for the popular Ford F-150 pickup truck, achieved a perfect score in a recent independent audit of its weld effectiveness. That makes the plant the best in the company (perhaps the best, period) when it comes to the precision with which it forms and welds sub-assemblies. Executives from Ford Motor Company corporate offices and management from other Ford operations want to know how they do it.

A significant factor is a condition-monitoring program using thermography or thermal imaging. Thermography itself isn’t new to DSP, or new to Ford operations, but the DSP thermography program is unique. After only 30 days, the program scored higher in an insurance audit than any other Ford thermography program had ever scored. It continued to work even when new thermography team members came on board.

In the best tradition of Ford’s commitment to continuous improvement, DSP’s thermography program keeps getting better. And possibly the most distinctive aspect: the program is designed around a systems approach and supported by a systems solution. That means it has the potential to be quickly and easily replicated and the possibility of being deployed with equal success and equivalent return on investment in any Ford operation.

The DSP operation
The Dearborn Stamping Plant occupies much of a two-story building that has supported the manufacture of Ford vehicles since the 1930s. The first floor houses various sub-assembly lines supporting the large number of F-150 styles, where inner and outer door and hood panels are spot-welded together. Before that welding can happen, sheet steel must be formed into panels using four presses located on the second floor of the facility. These include a new, very efficient Schuler Extra Large “A” Transfer Press with five successive slides (presses).

Plant manager Frank Piazza approached process engineer Jim Jackson and asked him to develop a new thermographic process capable of meeting the requirements. The program had to be user-friendly, maintainable, replicable and, most importantly, reliable.

“When a press goes down, eventually door lines shut down,” Jackson explains. “Ford makes a total of about 135 F-150 trucks an hour at its three assembly plants. If we were to have a catastrophic failure, we only have three days before we shut down everything (all three plants).”

Jackson says that before the present thermography program was in place, DSP lost a press for five days. “It was not a pretty picture,” he recalls. “When the lines are down and not producing parts, the company is still incurring costs. Costs mount up fast.” Thermography had been used to assist in determining the root cause of the failure, but the plant’s original thermography program wasn’t doing the job adequately. Since the new thermography program began, there have been no such incidents at DSP. In other words, the new program works.

What’s different at DSP?
The success of the program Jim Jackson put together with the help of John Lafeber, a thermal imaging consultant and a manufacturer’s representative for IR cameras, has several unique elements:

  • Skilled crafts trained in thermography…
    At the heart of DSP’s thermal imaging program is Ford Motor Company’s decision early on to use skilled trades to do thermography. “We chose to go with members of the skilled trades (electricians, weld fixture repair specialists, etc.),” says Jackson. “We realized that once these people were trained and certified in using an infrared camera, they would know what a thermal image meant and how it would impact the process.”
  • Autonomous maintenance…
    Jackson also believed that to be successful the thermography program needed to empower its thermographers to make decisions without the interference or second-guessing of anyone, including management. “I knew that they (the tradespeople) are the experts about how the equipment and processes function,” he says. “So, we did what everybody talks about. We empowered people, the experts, to make decisions about what needs to be fixed and when to do it.” The thermographers do the inspections, write the work orders, publish the reports and complete the follow-up to ensure that the concern is addressed and management is informed of the status of repairs and all associated issues.
  • A “lean” operation… A product of Ford’s overall commitment to continuous improvement and Jackson’s embracing of the thinking of James Womack [1], DSP’s thermography program is what Jackson calls a lean operation. This is one in which something is “done only once.” As a model, Jackson cites the way many modern retail operations do inventory: bar codes, databases and hand-held specialty computers with scanners. One step and no paper.

Immediate and ongoing successes
Everything came together for the new thermography program on a Monday morning two years ago. Rick Cox, an electrician, and Hassan Koussan, a specialist in weld-fixture repair, were equipped with IR cameras and Pocket PCs. That same day, Jackson received word that plant manager Piazza wanted to see “what he had paid all this money for” at a Wednesday morning meeting. This was just two days into Cox’s and Koussan’s training on using the new systems solution for the DSP thermography program. With the support of Jackson, their maintenance mentor (or, in lean terms, their “sensei”), Cox and Koussan went to the Wednesday meeting and reported on their results. Piazza liked what he saw and heard.

Cox’s initial thermographic responsibilities were to be on the press floor. Koussan, the weld-fixture expert, was assigned the welding lines in the assembly area. At the end of the first month of the program, there was an insurance audit. DSP’s thermography program scored higher than any other Ford thermography program had ever scored, and that was only the beginning. Over the first two years of the program, there have been many audits: insurance, ISO and third-party weld audits. The results have always been the same: the best thermography program the auditors have ever seen.

The weld audits are semi-annual events at DSP. An independent auditor performs them in order to verify the integrity of the welds performed on door and hood sub-assemblies. In part, these audits are intended to confirm that DSP is meeting Federal Motor Vehicle Safety Standards for welding, including confirmation that safety-critical welds, called delta welds, are sound.

In March 2006, an independent audit performed on welds at DSP concluded: “The Dearborn Stamping Plant weld quality percentage is outstanding. The overall weld effectiveness is 100 percent. The group effectiveness is 100 percent.”

William “Bill” Bushey, weld engineer at DSP, agrees that the audit exceeded expectations: “We went from 98% weld quality, which was the best that we could hope for, to 100% weld quality. We were perfect in the audit, and now we are challenged to maintain that level of performance.”

Bruce Dudley, DSP’s manager of engineering, puts the accomplishment into perspective. He points out that if one did an SPC (statistical process control) analysis based on the number of doors produced at DSP, even by the very best world-class standards for quality, a small percentage of defects could be expected. Still, the audit at DSP found 100% weld effectiveness.

In other words, the welding operations at DSP got a perfect score, which means that F-150 doors and hoods are safe and of superior quality. Managers interviewed for this article attribute these audit results to the thermography program.

How DSP does thermography
In addition to thermal cameras, the DSP thermal imaging system includes an IR reporting database program (Lean DB from Thermal Trend) that lists, on desktop PCs, each piece of equipment that thermographers visit on inspection routes. The equipment is listed in the order in which it is thermally scanned. The same routes are loaded into each thermographer’s Pocket PC, which he carries with him during inspections, constantly updating the database for downloading into his desktop computer back in his office. When practical, each piece of equipment on an inspection route is bar-coded, and the bar code is scanned into the Pocket PC at each inspection. This ensures that the data collected is assigned to the correct asset (piece of equipment) and that no asset scheduled for inspection is missed. The equipment bar codes for the thermography program are linked to the plant’s maintenance management system to aid in tracking the work orders and concerns for each piece of equipment or system that is inspected.

Some experts estimate that thermographers typically spend more than 25% of their time doing reports. In the DSP system, reports are essentially complete when a thermographer enters data into his Pocket PC on the plant floor. The data is entered into screens formatted just the way a thermographer needs them to be organized. In fact, the software actually prompts the thermographer to enter the data needed (temperatures, loads, etc.) at each inspection site. The data is downloaded into a relational database, where it can be used for any purpose (e.g., reports) by anyone with access to the network.
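The prompted-entry idea can be sketched in a few lines. This is a hypothetical illustration, not the actual Lean DB software: the field lists, asset names and readings below are assumptions. The point is that validating the prompted fields at entry time means the report row is finished the moment data entry finishes on the plant floor.

```python
# Hypothetical prompt lists per equipment type (assumed, not DSP's actual fields).
PROMPTS = {
    "electrical_panel": ["phase_a_temp_f", "phase_b_temp_f",
                         "phase_c_temp_f", "load_amps"],
    "motor": ["bearing_temp_f", "load_amps"],
}

def record_inspection(asset_id, asset_type, readings):
    """Check that every prompted field was entered, then return a finished
    report row -- no separate report-writing step is needed later."""
    missing = [f for f in PROMPTS[asset_type] if f not in readings]
    if missing:
        raise ValueError(f"{asset_id}: missing readings {missing}")
    return {"asset_id": asset_id, "asset_type": asset_type, **readings}

row = record_inspection("PANEL-017", "electrical_panel",
                        {"phase_a_temp_f": 92, "phase_b_temp_f": 95,
                         "phase_c_temp_f": 141, "load_amps": 38})
```

Because incomplete entries are rejected on the spot, the relational database only ever receives rows that are already report-ready.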

The process just described supports the philosophy of a lean operation. Data gathered on the plant floor goes directly into the system without further manipulation. Therefore, documentation is always 100% up to date and paperless. Plans are already in the works to make the system wireless, eliminating the transfer of data “manually” back in the office.

Quick training, easy transitions
Because DSP’s thermographers are skilled tradespeople first and thermographers second, their training took only 30 intense days. The fact that the relational database they use is self-guiding and intuitive helped to speed up the process.

Since the initial launch of this thermography program, Cox has moved on to be the plant configuration administrator responsible for keeping track of all the equipment in the plant. His replacement is Chuck Larabell. Like Cox, Larabell is an electrician, and he, too, went from electrician to productive thermographer in about 30 days. Larabell was able to assume Cox’s routes and database and learn the thermography requirements, and has since made his own route additions to the original. Koussan is now the sensei/mentor for a skilled tradesperson from another Ford plant implementing a lean thermography program, and all concerned expect the transition to take about a month, too.

Jackson speculates that any operation willing to invest in thermal imagers, Pocket PCs, the required software and intense 30-day training for industrious, skilled craftspeople can create a successful, lean thermography program if it is willing to support and empower the thermographers to do the job. “It takes a systems approach,” he asserts. “We bought and implemented a system solution.”

Day-to-day thermography at DSP
The two main areas of the plant where thermography is performed are the press floor and the assembly floor. Respectively, Larabell and Koussan do thermography there, Monday through Friday. While the thermographers themselves may immediately fix a problem they discover, more often than not the repairs are done on a third shift set aside for maintenance, or on weekends when there usually is limited or no production.

When the thermal camera reveals a problem on an inspection route, Larabell and Koussan save an image so they can include it in an e-mail report when they return to their office. The report, sent to a list of recipients that includes everyone from plant manager Piazza to operators on the plant floor, also incorporates a work order number to ensure that there is correlation between the report and any necessary repairs done as a result. After the repairs are made, the thermographers receive a report to that effect, which alerts them to go back and verify that the repair was done effectively.

At DSP, reports are tools. They are sent on a daily basis to the teams responsible for repairing equipment. A simple report is also printed weekly for management. It shows problems that have been found and problems that have been resolved. Other special reports are printed when needed. To gain a better understanding of DSP’s thermography program, look briefly at how Larabell and Koussan do their work:

On the press floor… Larabell, who does thermography full time throughout the plant, monitors the four presses on two-week intervals. He concentrates on the electrical panels, each of which has an identifying bar code on the outside and scores of electrical contacts and components inside. He also scans motors, valves and other components, looking for problems and potential problems.

The database in Larabell’s Pocket PC includes the normal running conditions for the equipment scanned. As a result, he can compare current operational values to “what ought to be.” Furthermore, since every panel and every piece of equipment (where practical) has a bar code, it gets checked off in the database following scanning.
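The two ideas in this passage, checking an asset off the route by bar code and comparing readings against the stored normal running condition, can be sketched as follows. Asset names, baselines and the 10% tolerance are illustrative assumptions, not DSP's actual rules.

```python
# Route database: asset -> normal running temperature (deg F), i.e. "what ought to be".
route = {"PRESS1-PANEL-A": 104.0, "PRESS1-MOTOR-3": 150.0}
inspected = set()

def scan(asset_id, measured_temp_f, tolerance=0.10):
    """Scan one bar-coded asset: check it off the route and compare the
    current reading to its baseline."""
    baseline = route[asset_id]   # an unknown bar code raises KeyError: wrong asset
    inspected.add(asset_id)      # checked off in the database
    deviation = (measured_temp_f - baseline) / baseline
    return "flag" if deviation > tolerance else "ok"

status = scan("PRESS1-PANEL-A", 131.0)   # well above normal, so flagged
missed = set(route) - inspected          # assets still awaiting inspection
```

The set difference at the end is what guarantees the passage's claim that no asset scheduled for inspection is missed.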

On the assembly floor… Koussan monitors approximately 500 welding guns in the assembly area plus the related electrical panels and other equipment. These responsibilities translate into 1,500 pieces of equipment in his PC’s database. “Weld guns get checked at least once a month,” he says. “For other equipment, we know the history of incidents on each piece, and we set our inspection frequencies based on that.”

Koussan typically comes to work four hours before the end of the first (midnight) shift. This allows him to monitor the weld guns when they are not in production, using instruments other than a thermal imager to collect and trend data that may help him detect potential problems.

Once production starts on the first shift, Koussan begins using thermography on the assembly processes. He “shoots” weld guns, transformers, shunts and their cabling, weld control panels and electrical panels.

Since he comes in at the end of the maintenance shift, Koussan is in a position to ask if problems from the previous day have been fixed. An affirmative answer sends him to the location with his IR camera to confirm that repairs have been made successfully. “Ninety-nine percent of the time a problem is fixed that night,” he says. “If it’s not, then we have to follow up on it the next day.”

Looking ahead
Everyone at DSP interviewed for this article expressed a commitment to continuous improvement in all aspects of DSP’s operations, including the thermography program. One tool for achieving continuous improvement in thermography is a bi-monthly meeting of the thermography team, which includes Lafeber.

“We hold these meetings to help us understand the program and how to improve it,” Lafeber acknowledges. He further explains that suggestions for improvements often result from thinking about thermography as a lean process. Going wireless, for example, will eliminate a step in the reporting process.

One significant initiative that has come out of “thinking lean” is a proposal to rewrite the database software to support a continuous flow model rather than the traditional batch and queue way of operating. “In the past, thermography has been done on a batch and queue basis: a batch of inspections, then a batch of reports, then a batch of repairs,” Lafeber notes. “What is more efficient is to do continuous flow thermography. The original database used by DSP was designed for batch and queue. DSP has written specifications for new software to take better advantage of continuous flow thermography. It will reduce the time from problem detection to repair and make better use of the data in the database.”

Wayne Little, facilities supervisor at DSP, is quick to point out that compared to other Ford stamping plants, DSP’s yields are higher, even though the plant runs fewer presses for fewer hours. In fact, the yields are up, even on the presses that were there before DSP installed the relatively new five-slide Schuler. MT

John Pratten III is an ASNT-compliant, Level II-trained thermographer with Fluke. He conducts training for customers, including a number of Fortune 500 companies, in the fundamentals of IR and how to set up quality PdM programs. He also performs thermography work and training in building science, including working closely with and training various state agencies involved with projects to improve the quality of low-income housing. E-mail:


1. James P. Womack is the founder and chairman of the Lean Enterprise Institute, a nonprofit educational and research organization chartered in 1997 to advance a set of ideas known as lean production and lean thinking.



6:00 am
January 1, 2007
Print Friendly

Problem Solvers: Conveyor Idler Bearing Isolator Increases Reliability And Safety While Reducing Downtime And Power Consumption

Inpro/Seal Company has announced its new Belt Conveyor Idler/Roller Bearing Isolator. The result of direct customer request, input and feedback, continuous R&D and extensive field testing and trials, this product was designed to increase productivity, save energy and improve safety in coal mining, ore mining, aggregate and related operations that use belt conveyors for bulk material handling. Before the advent of the Belt Conveyor Idler/Roller Bearing Isolator, users had to rely on outdated sealing methods, in particular elastomeric seals, to protect idler bearings. These small, spring-loaded contact seals are tiny plastic devices that contact and rub on the exterior of the idler roll during operation. Elastomeric seals are widely used because they are cheap and because nothing better has been available until now. Since a contact seal is prone to failure and needs constant maintenance, the entire bearing protection system is somewhat precarious. And when an elastomeric seal quits working, undesirable things happen, often without warning.

A huge industry problem
Belt conveyors are in service around the world, working 24/7 to “trough the belt,” or transport bulk materials, in coal mining, ore mining, aggregate, hard quarry and related applications, including concrete, asphalt, fertilizer, salt, recycling, wood, pulp and paper, electric utility, grain, construction, agriculture, steel and general industry. These belts are typically supported by three conveyor rollers, or idlers, positioned at intervals as close as three linear feet. One roller is horizontal and the other two are positioned on either side, at the angle necessary to carry the burden. Depending on the specific application, they operate above and below ground and may extend for many miles over mountainous terrain, roads and streams. There may be as many as 10,500 bearings and bearing protection devices on the conveyor rollers per mile of run. In the mining industry, it’s estimated that each site has 3-4 miles of conveyor with idlers strung out the entire length of the belt.

When an idler fails, it is most likely the result of bearing damage caused by contaminants (dust or moisture) entering the bearing environment. Chances are the plastic has failed by wearing out and has grooved the shaft or has burned to a crisp at the point of contact. Once an elastomeric seal fails, contaminants are drawn into the housing, where they condense and contaminate the lubricant and cause the bearings to fail. The end result is a seized roll, belt damage or worse. The idler can burst open, and if it does, metal-on-metal contact can cause a fire. To counter this, most mining operations employ greasers that work around the clock trying to keep idler bearings lubricated in an effort to make contact seals work. But, because lip seals carry a 100% failure rate, eventually users will have to deal with catastrophic belt failure no matter what they try.

A welcome solution
Inpro’s Belt Conveyor Idler/Roller Bearing Isolator is custom-engineered to suit individual applications. It is easy to install because it conforms to existing clearances, housings and bearing patterns. It can be retrofitted to any existing manufacturer’s top side and return frame assemblies in any belt width or troughing angle for any brand of conveyor. It is available in any idler configuration, including CEMA B, C and D, and will fit any idler type, including transition, impact, troughed, training, return belt, flat carrier, rubber cushion return, self-aligning, self-aligning return, offset center roll, picking and feeding, unequal length troughed, wire rope, wire rope return, low profile, “V” return, variable trough, rubber disc, ceramic, two-, three- and five-roll garland, live shaft and side guide conveyor idlers.

Inpro/Seal Company
Rock Island, IL

Split Shaft Seals Curb V.O.C. Emissions

Original MECO® custom shaft seals, made by Woodex Bearing, have proven effective in containing V.O.C. (volatile organic compound) vapors from rotating reactor, dryer, extractor and conveyor driveshafts, resulting in local solvent concentrations of 100 PPM and less. MECO’s patented seal designs are custom-engineered to accommodate diametric shaft run-out of 6 mm and more and still hold vacuum. Some models can operate at high temperature without a purge or flush line. Fully split models can be installed on existing machinery with minimal downtime. Seal performance is reliable, with long run-time between rebuilds, even in applications with bent or misaligned shafts. Seal maintenance can be predicted far in advance. These seals are used on rotating equipment in the dry powder and bulk processing industries. FDA-approved materials are available.

Woodex Bearing Co., Inc.
Georgetown, ME


Repairs, Rebuilds, Upgrades

The Stock Perpetual Motion after-sales customer care program can keep you running during critical load periods. New in the U.S., it comes standard with the purchase of a Stock Bulk Material Handling (BMH) product, offering 24/7 support for companies wishing to improve existing plant performance without complete equipment renewal. It starts with a technical assessment of your bulk handling equipment and a subsequent status report. From this evaluation, Stock can suggest upgrade or rebuild recommendations that improve performance and enhance operation without the high cost or lost time associated with complete replacements. Stock can perform the recommended service and provide guarantees on reliability and performance. Perpetual Motion can be built into a tailored contract package, allowing customers to upgrade a plant at scheduled intervals and as part of an ongoing program of services. These services typically include routine maintenance, lubrication and call-out.

Stock, Schenck Process Group
Cleves, OH


Meet All Sanitary Regs With This Easily Serviced/Cleaned Powder & Bulk Conveyor

Hapman has added the Series 600 (6” diameter) Helix™ flexible screw conveyor to its versatile Hi/Lo tilting base conveyor line. This original Hapman design has the added option of a ribbon-style agitator to assure proper size reduction of material for consistent conveying. As with other Hapman Hi/Lo units, it can be quickly moved from location to location and is easily serviced/cleaned in its lowered position. Finishes can be standard industrial, food grade or 3A Dairy. Hapman is currently the only manufacturer of flexible screw conveyors able to provide a USDA Equipment Acceptance Certificate.

Hapman
Kalamazoo, MI


“No-More-Lube” Chain Technology

According to its manufacturer, the Renold Syno line sets a new benchmark for chain performance with little or no lubrication. Covering both small and large pitch sizes, this technology has been tailored into three different products that carry the Syno name. They include:

  • Nickel-Plated for hygiene-sensitive applications where lubricant contamination must be avoided
  • Stainless Steel as an option when the application requires enhanced levels of corrosion resistance
  • Polymer Bush to tackle serious wear and fatigue associated with higher-load, heavier-duty jobs

Renold Jeffrey
Morristown, TN


“Like-New” Separators With Genuine Parts And Rebuild Services

Operators of ROTEX® separation equipment can enjoy peace of mind in knowing that maintenance of this equipment is fully supported by the ROTEX Parts & Service group. Whether customers need a single replacement part or an entire refurbishing and upgrade of any ROTEX machine to like-new condition, the Parts & Service group has the expertise and the inventory to keep the company’s customers covered. According to a ROTEX spokesman, the company will completely disassemble the screener, replace worn parts and install the latest technology so that when it sends a piece of equipment back to the customer, it’s the same as a new machine. That’s quite a cost-efficient alternative to purchasing new equipment. The ROTEX line of innovative separation equipment includes Gyratory and Vibratory Screeners and Sifters for Dry Applications, Liquid-Solid Separators for Wet Applications, Automated Particle Size Analyzers and Vibratory Feeders and Conveyors.





Beyond Milestones

With this January 2007 issue, MAINTENANCE TECHNOLOGY reaches an important milestone: completion of two decades of publication as the premier magazine for the plant equipment reliability, maintenance and asset management community. One of the key reasons for our success during these 20 years has been industry’s growing recognition that it is the maintenance function and YOU, the hardworking professionals involved in it, that keep plants and facilities across all market segments up and running.

Unlike other publications in the industry, MAINTENANCE TECHNOLOGY is dedicated to serving that large, but very select, audience of managers and supervisors who are responsible for ensuring the reliability and availability of their organizations’ systems 24/7. We are the leader in this very important market because we are the ONLY publication focused 100% on this industry and its never-ending quest for world-class maintenance status.

January, as the cliché goes, is a time of reflection and of renewed commitments for improvement. Around here, though, we don’t stop with January. Throughout the coming year, we will continue to reflect on what we have been doing for you over the past 20 years and, more importantly, how we can serve you better in the future.

MAINTENANCE TECHNOLOGY is your publication. From the beginning, it was designed to help you successfully address the many challenges that complex industrial environments throw at you on a daily basis. And, just as you are held responsible for the efficient running of your plants and facilities, we expect you to hold us responsible for delivering the type of information that helps you to do your jobs better…faster…more cost-effectively…

Year after year, MAINTENANCE TECHNOLOGY has strived to be far more than just another trade journal that lands on your desk each month. We have sought to be a valuable partner with you and your company, helping you to wade through and understand the countless technologies and strategies, both available and emerging, that can help make your job easier and your operations more reliable and profitable. Our mission hasn’t changed over the years…but we like to think that we’ve grown far stronger in our pursuit of it.

Of course, we couldn’t be where we are today without YOU, our loyal readers and advertising partners. Thank you so much for your past support. We look forward to working with you over the next 20 years!

Happy New Year! MT





The Most Productive Nation


Bob Williamson, Contributing Editor

What should we wish for in 2007? Cutting operating costs has been at the top of the business and industry wishlist for over 30 years…

Sometimes the cost-cutting bell gets rung louder than others. It all depends, some say, on Wall Street investors, stockholders, executive decisions, the marketplace, competition, return on investment, global economic changes and/or currency exchange rates. Then, in prosperous times, the cost-cutting bell is silenced. Should we wish for more of the same?

The United States remains the most productive nation in the world, and U.S. manufacturing has remained the most productive in the world since before 1960! Despite what the media says, despite politicians’ interpretations, despite what some may think, we are a model of economic stamina, whether measured by Real GDP (Gross Domestic Product) per capita or Real GDP per employed person. The top 10 in Real GDP per capita in 2005: U.S., Norway, Denmark, Netherlands, Canada, Austria, U.K., Belgium, Sweden, Australia. Manufacturing, not the service industries, is one of the sources of “original wealth” (along with mining and agriculture). Should we wish to remain the most productive nation in the world? If so, we have serious work to do…and we already know how to do it!

Good news continues to be reflected in this year’s productivity trends: U.S. manufacturing Unit Labor Costs (ULC) fell 8.3% in the second quarter and 4.1% in the third quarter of 2006 (ULC = average labor compensation per unit of output). Productivity improvement measures, including advanced manufacturing methods, workplace innovation, favorable currency exchange rates, and (I believe) our maintenance and reliability improvements continue to sustain America’s competitive edge.

Low-wage countries continue to attract the attention of some manufacturers. However, these countries (China, India, Mexico, Turkey, Czech Republic, Hungary and Poland) also have extremely low productivity levels. This is where Unit Labor Cost comes in: a true measure of economic productivity. For example, wages are considerably lower in China and India (only 2% to 3% of U.S. wages). But productivity is also significantly lower in China and India (12% to 13% of U.S. productivity). That means considerably MORE labor hours are required to make the same output in China and India than in the U.S. Still, China’s and India’s Unit Labor Costs are lower than those of the U.S., but only 20% lower, on average. And 20% isn’t that much when you calculate the true “costs” of importing goods from Asia. These include actual transportation, in-transit damage, un-returnable defective products, long lead times for changes and order quantities, and high inventory levels that have to be maintained here, not to mention the risk of dealing with a country (China) that doesn’t recognize proprietary information, patents, trademarks or copyright protections.

China and India, among others, will continue to be formidable consumers and competitors in the global market. Twenty-eight percent (28%) of all of the world’s jobs are in China and 15% are in India. As their standards of living increase, so will their cost of living and their employee compensation. In China, for example, average hourly compensation in manufacturing jobs rose 8.8% from 2002 to 2003, and another 8.1% from 2003 to 2004. To retain their lower ULC, China and India must employ increasingly advanced manufacturing technologies, methods and innovations, along with their economic and environmental reform policies. Advanced manufacturing requires increasing levels of skilled and highly skilled workers and technicians, which also brings higher compensation levels. As noted in previous columns and articles, developing and attracting higher-skilled workers will continue to be an escalating worldwide problem.

Our challenge for 2007 and beyond is to keep our productivity levels high and our operating costs down as we enter a 19-year era of drastic workforce demographic changes. We must dramatically improve the education levels of our workforce to facilitate error-free operations, plus accelerate our ability to rapidly innovate and improve our infrastructure, facilities, manufacturing, transportation and utilities. Our business and government leaders, schools and families all play a role in retaining and improving our competitive advantage. Look what’s happened over the past 30 years: Vocational/technical school programs have declined, as have skilled trades apprenticeship programs. Many manufacturing and maintenance jobs have lost their luster, despite relatively high wages. Changes in taxes, insurance, health care, permits and liability litigation have increased costs. The cost of procuring and transporting raw materials and finished goods has skyrocketed. Outsourcing and off-shoring, once thought to be “the answers” to our industrial woes, may not always be the best path to a long-term, viable economy. These strategies often just turn out to be “quick fixes” with long-term consequences.

My wish for 2007? Let’s all do our part to improve our Nation’s success by building a solid foundation based on an educated, motivated, innovative workforce. Let’s make our critical equipment, infrastructure and facilities the most reliable and best-maintained and our standard of living and productivity the highest in the world. Here’s wishing all of our faithful readers a very happy and prosperous New Year!

AUTHOR’S NOTE: The facts and statistics for this article were obtained from The Conference Board Report (October 2006); The Conference Board via Newswire (June 01, 2004); USDOL, Bureau of Labor Statistics News (Nov. 30, 2006 & Dec. 5, 2006); and the USDOL, BLS, Office of Productivity & Technology report: “Comparative Real GDP Per Capita and Per Person Fifteen Countries 1960-2005.”




Asset Intelligence Goes Beyond Basic Condition Monitoring

With new and increasingly powerful on-line equipment diagnostic tools becoming available every year, process manufacturing industries now have the opportunity to integrate this critical equipment condition information into their asset management strategies. These strategies can support more business-driven approaches aimed at improving overall financial performance. Much work still needs to be done, however.

Until now (in process manufacturing operations at least…), the focus has been on relatively limited and specific diagnostic monitoring of intelligent field devices and large rotating equipment. This is due largely to the widespread availability of highly capable, fieldbus-enabled condition monitoring tools, such as vibration, temperature and pressure monitoring and fluid analysis, all of which can be integrated into the control system strategy to react to critical changes in the readings.

But, within an overall asset management strategy, it’s important that real-time condition monitoring practices go beyond intelligent field devices and large rotating equipment to encompass all plant production assets. These should include all sensors and actuators (regardless of the vendor); rotating and non-rotating equipment, such as pumps, motors, compressors, turbines, mixers, dryers and heat exchangers; even entire process units.

The real goal is to move to predictive and proactive decision-making based on developing trends versus our current reactionary approach. This means that large (and often overwhelming) amounts of real-time diagnostic data now available must be collected, aggregated and analyzed, then put into proper context and made available to other plant and enterprise systems. In addition, we need to manage and control the resulting actions to manage risk and support our continuous improvement efforts, bringing together Maintenance, Operations and Engineering. By pulling these three aspects together—collection, analysis and action—we move from condition monitoring to “condition management” based on real-time asset intelligence.
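The collect-analyze-act loop described above can be illustrated with a minimal sketch. This is not any vendor's condition-management product; the slope threshold and the readings are assumptions. The key idea it shows is acting on a developing trend rather than waiting for a single reading to cross an alarm limit.

```python
def trend_slope(readings):
    """Ordinary least-squares slope of a series of readings over sample index."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def condition_action(readings, slope_limit=0.5):
    """Act on the trend: a steady rise warrants a work order even while
    every individual reading is still inside its alarm limit."""
    return "raise_work_order" if trend_slope(readings) > slope_limit else "monitor"

# A bearing temperature creeping upward shift after shift triggers action:
action = condition_action([70, 71, 73, 74, 76, 78])
```

A flat series returns "monitor"; the decision, with its context, is what gets handed on to the maintenance, operations and engineering systems.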

The key lies in developing a knowledge management capability that captures the expertise of today’s highly experienced operators, engineers and maintenance technicians. While this capability is important today, it will become even more critical in the future as our industrial plants struggle to maintain current levels of asset utilization and availability with an ever-shrinking pool of skilled and knowledgeable personnel due to an aging workforce and retirement of many of our most experienced people.

By combining this knowledge with an integrated view of the entire operation from both the business and operations perspectives, we can move to an environment where more informed decisions can be made in a more timely fashion. From this base, we will be well-positioned to manage the risks inherent in the process industries (i.e., health and safety, regulatory, financial and environmental) while delivering improved business performance and shareholder value. MT




The Maintenance/Production Partnership: Part II


Ken Bannister, Contributing Editor

Role definition is crucial if the Maintenance and Production departments are to strike an accord and work in an autonomous, yet cohesive, manner to deliver a high-quality product in a waste-free, cost-effective way. Virtually every major management philosophy and methodology in practice today recognizes and fosters the integral relationship between the Maintenance and Production departments. Zero-inventory-based Just-In-Time (JIT) and lean manufacturing methods would not be possible without high levels of equipment reliability and availability, driven by active operator involvement in the maintenance process.

Autonomous operator-based maintenance is foundational to the Total Productive Maintenance (TPM) philosophy, and is a cornerstone of the Reliability Centered Maintenance (RCM) methodology, both of which heavily utilize operator input to design, implement and continuously improve equipment maintenance reliability strategies. Increasing reliability and throughput requires Maintenance and Production to work together on a two-pronged management and hourly workforce level.

Operator-based maintenance
Operator-based maintenance can be implemented through the following three-step approach designed to promote confidence in both parties:

Step 1: Commence with a revised work acceptance procedure. Whenever Production calls in a machine problem, guide the caller(s) to disclose their name, the machine #/description, location, area of the problem (component or system) and a primary-sense STILL (Smell, Touch, Intuition, Look, Listen) analysis of what the problem is believed to be. Operators instinctively know when their equipment is not running in the “sweet spot,” but they are rarely asked for their opinion(s). This step simplifies and speeds up the pre-planning process and allows the scheduler to more accurately dispatch the correct resources the first time.

Step 2: Allow and encourage operators to be part of the testing, start-up and acceptance after repair completion.

Step 3: Introduce Reliability Centered Maintenance (RCM). Choose a suitable RCM pilot and always include the relevant equipment operator and supervisor as part of the RCM analysis team when performing the FMEA analysis and condition-based maintenance work tasks. Use a perimeter-based maintenance approach in which the equipment is set up for rudimentary preventive and condition monitoring checks while running. These checks can include temperature, flow, throughput, fill level, pressure and filter cleanliness, set up in an interactive “Go/No Go” style that lends itself perfectly to a regular operator check. This type of “Go/No Go” check only requires paperwork, in the form of a work request, when a “No Go” state is in effect.

Take, for example, a pre-RCM PM work order that might have instructed a maintainer to check and record all gauge pressures. This would not just be a waste of maintenance resources; the maintainer also would have to know the upper and lower safe operating window (SOW) limits for every gauge if a situation were to be immediately averted.

Recording every good pressure in the CMMS history also is meaningless and a waste of resources when it comes to inputting the data. Marking each gauge with the SOW allows any person viewing the instrument to tell if the needle is in the safe, or “Go,” position between the lines, in which case no further action is required or taken. If, however, the needle is outside the SOW mark lines, or in a “No Go” state, the operator contacts the supervisor, who immediately raises a work request for Maintenance to attend to the pending situation. Because of the RCM FMEA analysis, Maintenance knows right away what the root cause could be and activates a planned work order in response to the event condition.
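The “Go/No Go” gauge check just described reduces to a simple range test. The sketch below is illustrative; the gauge name and SOW limits are assumptions. Its behavior matches the procedure: a reading inside the marks produces no paperwork at all, while a reading outside them generates a work request.

```python
# Safe operating window (SOW) marks per gauge: (low, high) limits (assumed values).
SOW = {"pump7_discharge_psi": (40.0, 65.0)}

def gauge_check(gauge, reading):
    """Return None for a "Go" reading (no action, no paperwork); return a
    work request record only when the reading is in a "No Go" state."""
    low, high = SOW[gauge]
    if low <= reading <= high:
        return None
    return {"gauge": gauge, "reading": reading,
            "state": "No Go", "action": "raise work request"}

ok = gauge_check("pump7_discharge_psi", 55.0)    # inside the SOW marks: "Go"
req = gauge_check("pump7_discharge_psi", 72.0)   # outside the marks: "No Go"
```

Only the "No Go" branch creates a record, which is exactly why every good pressure need never be written into the CMMS history.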

RCM is a perfect catalyst for building and cementing autonomous operator maintenance, the first-level maintenance approach advocated by Total Productive Maintenance (TPM), wherein the operator becomes the true machine guardian on a daily basis. Once a comfortable maintainer/operator working relationship is established, more complex PM-styled tasks, such as lubrication and filter changeouts, can be engineered into the operator-based maintenance program. In Fig. 1, operator-based maintenance is shown dovetailing into the core element of the maintenance process.


Maintenance/production management alignment
Aligning the Maintenance and Production management teams to work in partnership is achieved through communication and an understanding of each other’s goals and objectives. In the process, the parties work collaboratively in the planning and scheduling of the production equipment uptime and downtime activities.

As both departments own the equipment in different ways, both compete for “alone” time with the equipment. Unfortunately, if both agendas are not harmonized, the equipment will suffer and both departments will lose.

The interactive input/output information required of both departments to prepare and schedule weekly forecasts and daily work schedules effectively is depicted in Fig. 2. In both cases, monthly and weekly schedule forecasts are built on an ongoing basis and used as “best guesstimates” for assessing and managing resource requirements. From these forecasts come the daily schedules, which are usually 70% to 95% accurate and should be just flexible enough to allow for minor unforeseen changes. To synchronize these daily schedules, both Maintenance and Production must agree, through the RCM process, on the point in an asset’s condition that dictates an uncontested responsive event, in which both the Maintenance and Production planning and scheduling departments will work together in the asset’s interest alone.


The Maintenance department can further assist the Production staff by providing a series of documents that include: a daily equipment condition report spelling out any triggered alarm conditions and found “No Go” exceptions that require planning and scheduling; a status report of unfinished or “carryover” work from a previous day or shift; a report-driven form with the fault codes marked on the work orders to show the percentage of non-maintenance-caused equipment failures (i.e., operator error, loading errors or jamming, overloading, etc.); and an equipment availability report. The Production department can further assist the Maintenance staff through the provision of a report detailing any pending product changeover or retooling event from which Maintenance can take the forced downtime opportunity to plan and schedule backlog or pending work on that equipment. Production will also assist Maintenance by providing reports on raw material problems, equipment incidents and any work requests. Getting together on a daily basis allows the information transfer and the setting of an almost fixed daily schedule. The product of this is equipment reliability and availability that translates directly into sustainable throughput and quality!

Ken Bannister is lead partner & principal consultant for Engtech Industries, Inc. Telephone: (519) 469-9173; e-mail:




6:00 am
January 1, 2007

Reducing Hot-Spot Temperatures in Transformers

In this real-world study from the power gen sector, researchers tested external oil coolers and ultra pure mineral oil to determine their effectiveness on hot spots, and, ultimately, equipment reliability

Over the past several years, Consumers Energy (“Consumers”) has come to rely strongly on external oil coolers to delay scheduled transformer capacity increases, or to cool transformers that experience marginally high top-oil temperatures. A transformer experiencing a top-oil temperature of 90 to 100 C or more would be a likely candidate for such an installation. These types of external coolers are installed in close proximity to the transformer using flexible hoses that are typically connected to existing 1-1/2 in. taps near the top and bottom of the transformer.

Now that Consumers has acquired more than 20 oil coolers, questions are frequently asked regarding the effectiveness of these units in actually limiting the loss of insulation life. Although the cooler reduces the oil temperature, there is a concern that it may be disrupting the natural convective oil flow inside the transformer, and that the hot-spot cooling effect may not be as great as expected or as indicated by the top-oil temperature.
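The link between hot-spot temperature and loss of insulation life can be made concrete with the aging acceleration factor from IEEE C57.91 for thermally upgraded paper (65 C average winding rise), under which the aging rate roughly doubles for every 7 C of hot-spot temperature near the 110 C reference. The sketch below applies it to hot-spot values of the kind reported later in this article.

```python
import math

def aging_acceleration_factor(hot_spot_c):
    """Aging acceleration factor per IEEE C57.91 for thermally
    upgraded paper; equals 1.0 at the 110 C reference hot spot."""
    return math.exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))

# Reducing a hot spot from 112 C to 100 C (the Study One observation)
# slows insulation aging by roughly a factor of 3.5:
ratio = aging_acceleration_factor(112.0) / aging_acceleration_factor(100.0)
print(f"aging slowed by {ratio:.1f}x")  # aging slowed by 3.5x
```

This is why a seemingly modest hot-spot reduction can translate into a substantial extension of paper-insulation life.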

Under normal conditions, the temperature gradient between the top and bottom of a transformer produces an internal oil circulation that acts to remove heat from the coils through convection. An external cooler can diminish this normal temperature gradient, resulting in reduced convective currents and, in theory, creating pockets of stagnant oil and inducing local overheating. To avoid this situation, some utilities have reportedly removed OEM-installed oil pumps from transformers where there has been no internally directed oil flow.

Equipment description
Study One…
The transformer selected for Study One was a unit being rewound for Consumers by Siemens Westinghouse of Hamilton, Ontario. This 5/6.25 MVA circular-core unit was originally manufactured by Allis Chalmers in 1952. Design changes by Siemens Westinghouse increased the OA rating to 6 MVA and the FA rating to 7.5 MVA. Six Luxtron fiber optic sensors were implanted near the top of the transformer’s secondary coils—two in each winding with one located between the first and second disk and one between the second and third disk. The sensors were installed as near to the mid-point of the disks as feasible and in contact with the copper conductor. These locations are thought to closely represent the transformer’s hot-spot location. All other temperatures recorded in this study were taken from standard thermocouples.

A 50 kW external oil cooler was obtained from Unifin of London, Ontario. This cooling unit consists of a 1 HP Cardinal pump, two 4.0 HP fans and a heat exchanger. The pump used by Unifin is designed for a variety of applications, with the desired oil flow for a given application achieved by throttling the flow with a valve on the discharge side of the pump. Nominally, this combination of components is rated by Unifin for a flow rate of 20 GPM, but the pump can produce a much higher flow, as was observed in this study.

Study Two…
The transformer selected for Study Two was a unit being rewound for Consumers by Ohio Transformer of Tallmadge, Ohio. This 5 MVA base circular-core transformer was originally manufactured by GE in 1963.

Six FISO fiber optic sensors, two per phase, were implanted in the coils of the transformer and a FISO Nortech-6 monitor was installed to record the readings. The hotspot locations were determined by the design team at Ohio Transformer, and the sensors were installed during the rewind process. All other temperatures recorded in this study were taken from standard thermocouples.

A 100 kW external oil cooler was obtained from SD Myers. This cooling unit consists of a 3 HP pump, 5.0 HP fans and a heat exchanger. The cooler is mounted on a portable trailer and includes hoses configured with check valves and quick connect fittings. The desired oil flow is achieved by throttling the flow with a valve on the discharge side of the pump. Nominally, this combination of components is rated by SD Myers for a flow rate of 50 GPM, with a capability of removing 340,000 BTU/hr.

An industry standard mineral oil and an ultra pure mineral oil manufactured by Petro-Canada under the trade name Luminol were obtained from Ohio Transformer. The transformer was first filled with standard mineral oil, tested, drained, refilled with Luminol, and then retested to obtain the efficiency comparison between the insulating oils used in combination with and without the external auxiliary oil cooler.

Study conditions and results


Study One…
Heat runs were initially conducted on the Allis Chalmers transformer (which had undergone design changes and was being rewound by Siemens Westinghouse) at the OA and FA ratings and then at 150% of the FA rating, or 11.25 MVA. While still at the 11.25 MVA level, the oil cooler was connected and temperatures were recorded until temperature stabilization was achieved. The cooler’s oil flow rate maintained for the initial run was 45 GPM. The observed temperature differential between the cooler’s inlet and outlet was consistently about 10 C degrees.
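As a sanity check, the measured flow and temperature differential can be converted into a heat-removal rate with a simple energy balance, Q = ṁ·cp·ΔT. The oil properties below (density ≈ 880 kg/m³, specific heat ≈ 1.9 kJ/kg·K) are assumed typical values for transformer mineral oil, not figures from the study.

```python
GPM_TO_M3_S = 3.785e-3 / 60.0   # US gallons per minute -> m^3/s

def cooler_heat_removal_kw(flow_gpm, delta_t_c, density=880.0, cp=1900.0):
    """Estimate heat removed as Q = m_dot * cp * dT.

    density (kg/m^3) and cp (J/(kg*K)) are assumed typical values
    for transformer mineral oil.
    """
    m_dot = flow_gpm * GPM_TO_M3_S * density   # mass flow, kg/s
    return m_dot * cp * delta_t_c / 1000.0     # kW

# 45 GPM with a 10 C inlet/outlet differential:
print(f"{cooler_heat_removal_kw(45.0, 10.0):.0f} kW")  # about 47 kW
```

Under these assumptions the result lands close to the unit's nominal 50 kW rating, which suggests the recorded flow and differential are self-consistent.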

One of the fiber optic sensors stopped working early in the first heat run. The instrument displaying the fiber optic temperatures is capable of displaying four readings at a time. The temperatures recorded were taken one from each of the outside-phase windings and two from the center-phase winding.

The warmest hot-spot temperature recorded while loaded to 11.25 MVA, and without the cooler operational, was 112 C on the center phase winding. When temperature stabilization was reached after the cooler was operational, this temperature had been reduced to 100 C. The magnitude of this temperature reduction was fairly consistent across all the sensors.

At the end of the first heat run with the cooler connected, the pump flow rate was increased to its maximum (estimated to be about 60 to 65 GPM) for one hour. No appreciable change was noted in the hot-spot temperatures as a result of this, although there was a reduction of two degrees in the top-oil and average-oil rise temperatures. Had the test continued at this higher flow rate for a longer period, it is expected that the hot-spot temperature would have registered a similar decline.

The flow rate was then reduced to 20 GPM for a four-hour period. This resulted in an increase in the hot spot temperatures of approximately 4 C degrees.

Study Two…
Heat runs were conducted on the GE transformer (which was being rewound by Ohio Transformer) at the OA and FA ratings and then at 150% of the FA rating, or 10.5 MVA, initially with the transformer filled with standard industry mineral oil and then repeated after draining the oil and re-filling with Luminol. While at the 10.5 MVA level and after the temperature stabilized, the oil cooler was connected and temperatures were recorded until they stabilized again. The cooler’s oil flow rate maintained for this study was 24 GPM.

The average hot-spot temperature recorded while loaded to the FA rating of 7 MVA, and without the cooler operational, was 92 C using standard oil and 87 C using Luminol, after stabilizing. When temperature stabilization was reached after the cooler was operational, this temperature was reduced to 83 C using standard oil and 80 C using Luminol. The magnitude of this temperature reduction was fairly consistent across all the sensors. The observed temperature differential between the cooler’s inlet and outlet varied between 8 and 14 C degrees using standard oil and between 11 and 18 C degrees using Luminol.

The load was increased to the 10.5 MVA level, the oil cooler was connected, and temperatures were recorded until temperature stabilization was achieved. At this point, it was observed that the average hot-spot temperature of 140 C, in both cases, had been reduced to 127 C, using standard oil, and 115 C, using Luminol. The magnitude of this temperature reduction was fairly consistent across all the sensors. The observed temperature differential between the cooler’s inlet and outlet varied between 12 and 15 C degrees, using standard oil, and between 21 and 28 C degrees, using Luminol. (See Tables I & II and Figs. 2, 3, 4, 5, 6, 7.)







This study substantiates the benefit of employing an external oil cooler, and the added benefit of using an ultra pure mineral oil (Luminol), in reducing a transformer’s hot-spot temperature, thus preserving the life of the unit’s paper insulation. The relatively large internal oil quantities and large heat-exchange surfaces of the transformers in this study result in relatively low internal oil and hot-spot temperatures.

Conversely, for a more modern unit with higher design temperatures, the expected temperature reduction with an external oil cooler could be even more impressive. However, the possibility of disrupted internal convection currents or diversion of oil from the transformers’ own radiators also would seem to be more likely because of the characteristically lower internal oil volumes. Consequently, a lower oil flow rate in the external cooler might be needed to avoid disrupting the transformer’s normal internal cooling pattern.

The transformer in Study One contained 1,920 gallons of oil, or 0.32 gallons per OA rated kVA, and the transformer in Study Two contained 1,300 gallons of oil, or 0.26 gallons per OA rated kVA. In a spot check of six transformers recently purchased by Consumers Energy, the lowest amount of oil found was 0.205 gallons per OA kVA rating. The SD Myers transformer maintenance guide reported in 1981 that some transformers had as little as 0.02 gallons per kVA.
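The oil-inventory ratios quoted above follow directly from the nameplate figures; a quick check, using only values stated in the article:

```python
def gallons_per_oa_kva(oil_gallons, oa_kva):
    """Oil inventory normalized by the transformer's OA rating."""
    return oil_gallons / oa_kva

# Study One: 1,920 gallons at a 6,000 kVA OA rating
print(round(gallons_per_oa_kva(1920, 6000), 2))  # 0.32
# Study Two: 1,300 gallons at a 5,000 kVA OA rating
print(round(gallons_per_oa_kva(1300, 5000), 2))  # 0.26
```

A unit down near the 0.02 gal/kVA figure cited from the SD Myers guide would carry more than an order of magnitude less thermal buffer per unit of rating, which is why cooler flow rates cannot simply be transferred from one transformer to another.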

In light of the significant variations in transformer oil volumes, flow to the external cooler may need to be tailored to the particular transformer involved. Besides possibly needing to modify the internal oil-cooling pattern, there also is a concern about creation of a vortex at the top hose connection, which would lead to air being sucked in and air bubbles being injected into the bottom of the transformer. A minimum oil level above the top hose connection must be maintained to avoid this, or other preventive measures must be adopted. MT

Noel Staszewski is a senior engineer in the Network Services Department of Consumers Energy. He has over 25 years of engineering experience in asset management and equipment maintenance in the utility industry, combined with additional experience in technology and product development, evaluation, reliability engineering and failure analysis of electronic components and systems in the automotive and computer industries. Telephone: (810) 760-3237; E-mail:

Mike Walker, a registered Professional Engineer in Michigan, spent 33 years in a number of engineering positions with Consumers prior to retiring in 2003. Since then, he has worked as an independent contractor for various companies. E-mail:
