Author Archive | Kathy

May 5, 2009

Part I… Building Cultures Of Reliability-In-Action

Development of effective decision-making skills and behaviors is the foundation of human reliability. This human element is crucial to your equipment and process reliability.


Process-oriented organizations drive value by improving their business processes and equipment performance. At the same time, however, applications such as asset management, work process improvement, defect elimination and preventive maintenance, among others, can be powerful but incomplete when seeking to sustain a competitive edge.

To implement and sustain high-performing, reliable cultures, managers need to be as rigorous about diagnosing, designing and implementing changes to the human decision-making process as they are with their business and equipment processes. Equipment and process reliability ultimately rest with human reliability. Thus, cultural change at its deepest level requires examining human reasoning and its resulting decisions.

To establish a culture-of-reliability requires going beyond the traditional stew of copycat approaches and learning how to: (1) use actionable tools to implement and sustain reliability improvements and bottom-line impact by (2) collecting cultural action data and (3) learning how to use that data to uncover hidden bottlenecks to performance.

In the quest for high performance, well-intentioned managers often launch cultural change efforts using what they believe to be applied methods, like employee surveys, team building, empowerment, leadership style, systems thinking, formal performance appraisal, 360° feedback, you name it, only to be disillusioned in the end by the fact that more change efforts fail than succeed. Although they may be well-accepted, traditional change methods are not precise enough to create and sustain cultures-of-reliability and typically evolve into the next flavor of the month.

The learning exercise
For the past 16 years I have been conducting a specific learning exercise related to cultural change. The purpose is to help participants understand why implementation is so hard. There are five objectives for the session:

  1. To discover root cause of implementation barriers;
  2. To illustrate the interdependent relationship between learning and error;
  3. To determine how participants personally feel when they make mistakes;
  4. Based on their experience of error, to understand how humans design a culture-in-action to avoid errors and mistakes; and
  5. To determine the costs of error avoidance to business and human dignity.

To start, participants construct a definition of competitive learning which, at its root, is defined as the detection and correction of mistakes, errors, variance, etc., at ever-increasing rates of speed and precision—the heart of reliability. Through poignant illustrations, they learn that their organizations tend to focus on making fast decisions (“time is money”), timelines, milestones etc., but at a cost to precision, the quality of the decision.

Based on that definition, the participants are asked to reflect on a recent performance mistake they have made on the job or in life. The responses from hundreds of them—male and female, Fortune 500 executives, managers, supervisors, engineers, technicians and craftsmen—are very consistent. When they make an error they feel shame, anger and frustration; they feel stupid, embarrassed and inadequate, with an impulse to hide the error and, at the same time, a desire to fix it. The result is an emotionally charged picture of wanting to fix mistakes coupled with an overwhelming response to hide them for fear of blame.

As the exercise unfolds, participants gain insight into how learning and mistakes, trial and error shape performance and how ineffective learning patterns persist for years. For example, individuals from process industries have revealed they’ve known that less-than-effective outages and turnarounds have existed for years; that “lessons-learned” sessions don’t successfully address operations and maintenance infighting and squabbles over what quality work means and the validity of data; that stalled work management initiatives or reprisals for management decisions are a fact of life; etc. The list goes on and on. Discovering why his division had not been able to penetrate a market for over 20 years, one vice president-level participant summed up the dilemma this way: “The costs [of ineffective learning] are so high, they are un-estimateable.”

Through collective reflection in a larger group, participants come to realize that they all experience learning in very similar ways. They also come to learn that their reasoning is very similar. They typically espouse that continuous learning is important and mistakes are OK, but, in the final analysis, mistakes are categorized as critical incidents on performance appraisals or simply seen as ineffectiveness.

When performance appraisal is tied to pay, rewards and promotion, participants indicate that they would have to be foolish not to “put the best spin” on things and save face at any cost. “I have a mortgage to pay” is how many respondents put it. At the same time, they acknowledge learning does occur, but at a rate that leaves much to be desired. “It’s not all bad,” is how many participants put it. Yet, this is not really a case of being bad. Rather, it is a case of sincere, hard-working people unknowingly designing a culture with a set of unintended outcomes.

At this point, participants begin to gain insight: they say one thing and do another. Moreover, they come to understand that it is easy to see defensive patterns in others, but not so easy to see defensive patterns in themselves. Not surprisingly, being defensive is espoused as not OK. Hence, good team players should be open to feedback. Not being open would be admitting a mistake, the very essence of pain.

In the final phase of the learning exercise, participants come to recognize that they have a strong desire to learn and they seek noble goals, but that fears of retribution for telling the truth, blame, fear of letting someone down or fear of failure, whether in substance or perception, contribute to a sense of loss of control. Unfortunately, this situation violates the first commandment of management: BE IN CONTROL.

The need for control translates into a hidden performance bottleneck, given the complexity of job interdependencies and systemic error. As one individual noted, “I can’t control what I can’t control, but I am held accountable. Accountability translates into who to blame.” Participants acknowledge that they subtly side-step difficult issues and focus on the more routine, administrative issues, thereby reducing emotional pain and conflict in the short term. They acknowledge that they bypass the potential for higher performance by not reflecting on gaps in decision-making.

Ironically, as these decision bottlenecks limit performance, expectations for better performance increase, often resulting in unrealistic timelines and more stress. Executives complain they just don’t get enough change fast enough, and middle managers and individual contributors complain of “micro-management.” Sound familiar?

The end result is that sincere attempts to improve the status quo are slowly and co-creatively undermined, and inadequate budgets and unrealistic timeframes are set. Good soldiers publicly salute the goals, but privately resist because their years of experience have taught them to think in terms of “what’s the use of telling the truth as I see it; this, too, will pass.” Ultimately, many see the “other guy(s)” or group as the problem and wonder why we can’t “get them” in line. This is the heart of an organizational fad—something that often is labeled as the lack of accountability.

Culture-in-action
Based on participants’ data generated from this learning exercise and action data recorded and collected from the field (see Part III of this series for the data collection method), a culture-in-action model, similar to that shown in Fig. 1, is created and verified with illustrations. Participants consistently agree this type of model is accurate and reflects their own current cultures-in-action.

Underlying assumptions…
The culture-in-action model is rooted in human reasoning. Given the assumptions of avoiding mistakes and being in control to win and look competent in problem resolution, the reasoning path is clear. The behaviors make perfectly good sense.

Behavior…
When seeking solutions, multiple perspectives will proliferate on which solution is best, some with more risk, some with less. Think of it as inference stacking. A complex web of cause and effect, solutions and reasons why something will or will not work are precariously stacked one upon the other, up to a dizzying height.

Determining whose perspective is right is problematic (“Your guess is as good as mine”). Hence, controlling the agenda to reduce frustration either by withholding information (“Don’t even go there”) or aggressively manipulating people to submit or comply with someone else’s views to get things done is a logical conclusion based on the underlying assumptions.

It is not surprising that executives seek to control their organizations and focus on objectives—and when they do this that middle managers privately feel out of control because they think they are not trusted to implement initiatives or handle day-to-day routines. This leads to the following managerial dilemma: If I voice my real issues, I will not be seen as a good team player. If I stay silent, I will have to pretend to live up to unrealistic expectations. Either way is no win (a real double bind).

To overcome this dilemma, people verify and vent their emotions one-on-one, i.e. in hallways, restrooms and offices. This way, they avoid confronting the real issue of how they are impacted by others, which is difficult to discuss in a public forum (“Don’t want to make a career-threatening statement”). Instead, they seek third-party validation that their beliefs are the right ones to hold (“Hey, John, can you believe what just happened in that meeting? I don’t think that strategy is going to work; didn’t we try it 10 years ago?”). Even the best-performing teams demonstrate some of these performance-reducing characteristics. The culture becomes laden with attributions about others’ motivation, intent and effectiveness and it is labeled “politics.”

Results…
Routine problems often are uncovered, organizations do learn, but the deeper performance bottlenecks, hidden costs, sources of conflict and high-performance opportunities are missed because the focus is on putting the “best spin” on “opportunities for improvement” with a twist of language to avoid the “mistake” word. That’s because mistakes are bad and people don’t like to discuss them. Interestingly enough, there are even objections to using the word “error” during the process of the exercise. It is not surprising that when trying to learn and continuously improve a turnaround, business process or project, for example, people privately will conclude “Oh, boy, here we go again. Another wasted meeting debating the same old issues.” Negative attributions proliferate (“They don’t want to learn”) and underlying tension grows.

At this stage of the process, the pattern begins to repeat itself. As the project effort falls behind, expectations build. Typically, someone will be expected to “step up” and be the hero. With eyes averted, looking down, amid uncomfortable silence, someone “steps up” and often gets rewarded. Yet this heroic reward doesn’t address root cause (i.e. what accounted for the errors and frustration in the first place). Side-stepping or avoiding the more difficult-to-discuss issues doesn’t help uncover root cause, but, rather, leads to fewer errors being discovered. As a result, the business goal is pushed a little further out and economic vulnerability is increased.

If the market is robust, errors and mistakes may mean little to a business. The demand can be high if you have the right product, at the right time. As competition increases, however, or the market begins to falter, the ability to remain competitive and achieve what the organization has targeted is crucial. Competitive learning is the only weapon an organization has to maintain its edge in the marketplace.

Major culture-in-action features
In summary, the major features of a true culture-in-action are:

  • Avoidance of mistakes and errors at all cost;
  • Little active inquiry to test negative attributions;
  • Little personal reflection (i.e. “How am I a part of the problem?”);
  • Little discussion of personal performance standards by which we judge others; and
  • Little agreement on what valid data would look like.

As the exercise winds down, it’s not long before someone asks, “So how do you get out of this status quo loop?” When this question comes, because it always does, I turn it back to the group and ask how they would alter this cultural system. The reaction is always the same—silence and stares. No wonder. The answer is not intuitively obvious, even to the most seasoned of practitioners and theorists.

The short answer is rather than “get” anyone anywhere, change has to be based on individual reflection and actionable tools driven through collaborative design and invitation. These actionable tools balance the playing field, at all levels, by helping create informed choice through daily decision-making reflection. Traditional intervention methods focus on changing behavior, learning your style or type, building a vision, etc. There are any number of approaches, all very powerful but incomplete without addressing the underlying reasoning (root cause) that is informing the behavior in the first place.

Coming next month
In Part II, a culture of reliability will be defined, as well as the role of reflection in organizational performance and the actionable tools of collaborative design. MT


Brian Becker is a senior project manager with Reliability Management Group (RMG), a Minneapolis-based consulting firm. With 27 years of business experience, he has been both a consultant and a manager. Becker holds a Harvard doctorate with a management focus. For more information, e-mail: bbecker@rmgmpls.com


April 29, 2009

Going Wireless: Wireless Technology Is Ready For Industrial Use

Wireless works in a plant, but you’ll want to be careful regarding which “flavor” you choose

Wireless technology now provides secure, reliable communication for remote field sites and applications where wires cannot be run for practical or economic reasons. For maintenance purposes, wireless can be used to acquire condition monitoring data from pumps and machines, effluent data from remote monitoring stations, or process data from an I/O system.

For example, a wireless system monitors a weather station and the flow of effluent leaving a chemical plant. The plant’s weather station is 1.5 miles from the main control room. It has a data logger that reads inputs from an anemometer to measure wind speed and direction, a temperature gauge and a humidity gauge. The data logger connects to a wireless remote radio frequency (RF) transmitter module, which broadcasts a 900MHz, frequency hopping spread spectrum (FHSS) signal via a YAGI directional antenna installed at the top of a tall boom located beside the weather station building. This posed no problem.

However, the effluent monitoring station was thought to be impossible to connect via wireless. Although the distance from this monitoring station to the control room is only one-quarter mile, the RF signal had to pass through a four-story boiler building. Nevertheless, the application was tested before installation, and it worked perfectly. The lesson here is that wireless works in places where you might think it can’t. All you have to do is test it.

There are many flavors of wireless, and an understanding is needed to determine the best solution for any particular application. Wireless can be licensed or unlicensed, Ethernet or serial interface, narrowband or spread spectrum, secure or open protocol, Wi-Fi…the list goes on. This article provides an introduction to this powerful technology.

The radio spectrum
The radio frequency (RF) spectrum, from approximately 9 kilohertz (kHz) up into the gigahertz (GHz) range, can be used to broadcast wireless communications. Frequencies higher than these are part of the infrared spectrum, light spectrum, X-rays, etc. Since the RF spectrum is a limited resource used by television, radio, cellular telephones and other wireless devices, the spectrum is allocated by government agencies that regulate what portion of the spectrum may be used for specific types of communication or broadcast.

In the United States, the Federal Communications Commission (FCC) governs the allocation of frequencies to non-government users. The FCC has limited the use of Industrial, Scientific, and Medical (ISM) equipment to the 902-928MHz, 2400-2483.5MHz and 5725-5875MHz bands, with limitations on signal strength, power and other radio transmission parameters. These bands are known as unlicensed bands, and can be used freely within FCC guidelines. Other bands in the spectrum can be used with the grant of a license from the FCC. (Editor’s Note: For a quick definition of the various bands in the RF spectrum, as well as their uses, log on to: http://encyclopedia.thefreedictionary.com/radio+frequency )

Licensed or unlicensed
A license granted by the FCC is needed to operate in a licensed frequency. Ideally, these frequencies are interference-free, and legal recourse is available if there is interference. The drawbacks are a complicated and lengthy procedure in obtaining a license, not having the ability to purchase off-the-shelf radios since they must be manufactured per the licensed frequency, and, of course, the costs of obtaining and maintaining the license.


License-free implies the use of one of the frequencies the FCC has set aside for open use without needing to register or authorize them. Based on where the system will be located, there are limitations on the maximum transmission power. For example, in the U.S., in the 900MHz band, the maximum power may be 1 Watt or 4 Watts EIRP (Effective Isotropic Radiated Power).
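(As a quick, hedged illustration of how such a limit is checked: EIRP in decibel-milliwatts is simply the transmitter power plus the antenna gain minus any feedline loss, so 1 Watt corresponds to 30 dBm and 4 Watts to 36 dBm. The Python sketch below uses illustrative gain and loss figures, not values from any particular radio or rule part.)

# Minimal sketch: checking a link against an assumed 36 dBm (4 W) EIRP ceiling.
def eirp_dbm(tx_power_dbm: float, antenna_gain_dbi: float, cable_loss_db: float = 0.0) -> float:
    """EIRP (dBm) = transmitter power (dBm) + antenna gain (dBi) - feedline loss (dB)."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

tx_dbm = 30.0          # 1 W transmitter: 10*log10(1000 mW) = 30 dBm
yagi_gain_dbi = 9.0    # hypothetical directional antenna gain
coax_loss_db = 1.5     # hypothetical feedline loss
result = eirp_dbm(tx_dbm, yagi_gain_dbi, coax_loss_db)
print(f"EIRP = {result:.1f} dBm")   # 37.5 dBm here, which would exceed a 36 dBm cap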

The advantages of using unlicensed frequencies are clear: no cost, time or hassle in obtaining licenses; many manufacturers and suppliers who serve this market; and lower startup costs, because a license is not needed. The drawback lies in the idea that since these are unlicensed bands, they can be “crowded” and, therefore, may lead to interference and loss of transmission. That’s where spread spectrum comes in. Spread spectrum radios deal with interference very effectively and perform well, even in the presence of RF noise.

Spread spectrum systems
Spread spectrum is a method of spreading the RF signal across a wide band of frequencies at low power, versus concentrating the power in a single frequency as is done in narrowband channel transmission. Narrowband refers to a signal which occupies only a small section of the RF spectrum, whereas a wideband or broadband signal occupies a larger section of the RF spectrum. The two most common forms of spread spectrum radio are frequency hopping spread spectrum (FHSS) and direct sequence spread spectrum (DSSS). Most unlicensed radios on the market are spread spectrum.

As the name implies, frequency hopping changes the frequency of the transmission at regular intervals of time. The advantage of frequency hopping is obvious: since the transmitter changes the frequency at which it is broadcasting the message so often, only a receiver programmed with the same algorithm would be able to listen and follow the message. The receiver must be set to the same pseudo-random hopping pattern, and listen for the sender’s message at precisely the correct time at the correct frequency. Fig. 1 shows how the frequency of the signal changes with time. Each frequency hop is equal in power and dwell time (the length of time to stay on one channel). Fig. 2 shows a two-dimensional representation of frequency hopping, showing that the frequency of the radio changes for each period of time. The hop pattern is based on a pseudo-random sequence.
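(The short Python sketch below illustrates the shared-pattern idea, not any vendor's implementation: a transmitter and receiver configured with the same seed derive the same pseudo-random hop sequence, so the receiver knows which channel to listen on during each dwell interval. The channel count, seed and sequence length are illustrative.)

import random

def hop_sequence(seed: int, n_channels: int, n_hops: int) -> list[int]:
    """Generate a reproducible pseudo-random channel sequence from a shared seed."""
    rng = random.Random(seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

SHARED_SEED = 0x5EED                 # both ends must be configured with this value
tx_hops = hop_sequence(SHARED_SEED, n_channels=75, n_hops=10)
rx_hops = hop_sequence(SHARED_SEED, n_channels=75, n_hops=10)
assert tx_hops == rx_hops            # receiver follows the transmitter hop for hop
print(tx_hops)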


DSSS combines the data signal with a higher data-rate bit sequence, also known as a ‘chipping code,’ thereby “spreading” the signal over greater bandwidth. In other words, the signal is multiplied by a noise signal generated through a pseudo-random sequence of 1 and -1 bits. The receiver then multiplies the signal by the same noise to arrive at the original message (since 1 x 1 = 1 and -1 x -1 = 1).
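(The following is a toy Python model of that chipping-code idea, offered only to make the arithmetic concrete: each data bit, represented as +1 or -1, is multiplied by a faster +/-1 chip sequence to spread it, and the receiver correlates with the same chips to recover the bit. The chip length and sequence are illustrative.)

import random

def spread(bits, chips):
    # Each data bit is replaced by bit * chip for every chip in the code.
    return [b * c for b in bits for c in chips]

def despread(chipstream, chips):
    # Correlate each block of chips with the known code; the sign recovers the bit.
    n = len(chips)
    out = []
    for i in range(0, len(chipstream), n):
        block = chipstream[i:i + n]
        corr = sum(x * c for x, c in zip(block, chips))
        out.append(1 if corr > 0 else -1)
    return out

rng = random.Random(42)
chips = [rng.choice((1, -1)) for _ in range(11)]   # pseudo-random chipping code (illustrative)
data = [1, -1, -1, 1]
assert despread(spread(data, chips), chips) == data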

When the signal is “spread,” the transmission power of the original narrowband signal is distributed over the wider bandwidth, thereby decreasing the power at any one particular frequency (also referred to as low power density). Fig. 3 shows the signal over a narrow part of the RF spectrum. In Fig. 4, that signal has been spread over a larger part of the spectrum, keeping the overall energy the same, but decreasing the energy per frequency. Since spreading the signal reduces the power in any one part of the spectrum, the signal can appear as noise. The receiver must recognize this signal and demodulate it to arrive at the original signal without the added chipping code. FHSS and DSSS both have their place in industry and can both be the “better” technology based on the application. Rather than debating which is better, it is more important to understand the differences, and then select the best fit for the application. In general, a decision involves:

  • Throughput
  • Collocation
  • Interference
  • Distance
  • Security

Throughput
Throughput is the average amount of data communicated in the system every second. This is probably the first decision factor in most cases. DSSS has a much higher throughput than FHSS because it uses its bandwidth more efficiently and employs a much larger section of the bandwidth for each transmission. In most industrial remote I/O applications, the throughput of FHSS is not a problem.

As the size of the network changes or the data rate increases, this may become a greater consideration. Most FHSS radios offer a throughput of 50-115 kbps for Ethernet radios. Most DSSS radios offer a throughput of 1-10 Mbps. Although DSSS radios have a higher throughput than FHSS radios, one would be hard pressed to find any DSSS radios that serve the security and distance needs of the industrial process control and SCADA market. Unlike FHSS radios, which operate over 26MHz of the spectrum in the 900MHz band (902-928MHz), and DSSS radios, which operate over 22MHz of the 2.4GHz band, licensed narrowband radios are limited to 12.5kHz of the spectrum. Naturally, as the width of the spectrum is limited, the bandwidth and throughput will be limited as well. Most licensed-frequency narrowband radios offer a throughput of 6400 to 19200 bps.

Collocation
Collocation refers to having multiple independent RF systems located in the same vicinity. DSSS does not allow for a high number of radio networks to operate in close proximity as they are spreading the signal across the same range of frequencies. For example, within the 2.4GHz ISM band, DSSS allows only three collocated channels. Each DSSS transmission is spread over 22MHz of the spectrum, which allows only three sets of radios to operate without overlapping frequencies.

FHSS, on the other hand, allows for multiple networks to use the same band because of different hopping patterns. Hopping patterns which use different frequencies at different times over the same bandwidth are called orthogonal patterns. FHSS uses orthogonal hopping routines to have multiple radio networks in the same vicinity without causing interference with each other. That is a huge plus when designing large networks and needing to separate one communication network from another. Many lab studies show that up to 15 FHSS networks may be collocated, whereas only 3 DSSS networks may be collocated. Narrowband radios obviously cannot be collocated, as they operate on the same 12.5kHz of the spectrum.
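(A hedged Python illustration of the orthogonal-pattern idea: several networks reuse the same channel set, but at any given hop index each network sits on a different channel, so collocated systems do not collide. Cyclic shifts of one base pattern are a simple way to get this property; real radios use vendor-specific schemes, and the channel numbers below are illustrative.)

def shifted_patterns(base: list[int], n_networks: int) -> list[list[int]]:
    # Each network uses the same pattern, rotated by a different offset.
    return [base[k:] + base[:k] for k in range(n_networks)]

base_pattern = [3, 17, 42, 9, 61, 25, 50, 12]        # illustrative channel list
nets = shifted_patterns(base_pattern, n_networks=4)

# No two networks occupy the same channel during the same hop interval:
for t in range(len(base_pattern)):
    channels_now = [p[t] for p in nets]
    assert len(set(channels_now)) == len(nets)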

Interference
Interference is RF noise in the vicinity and in the same part of the RF spectrum. A combining of the two signals can generate a new RF wave or can cause losses or cancellation in the intended signal. Spread spectrum in general is known to tolerate interference very well, although there is a difference in how the different flavors handle it. When a DSSS receiver finds narrowband signal interference, it multiplies the received signal by the chipping code to retrieve the original message. This causes the original signal to appear as a strong narrow band; the interference gets spread as a low-power wideband signal and appears as noise, and thus can be ignored.

In essence, the very thing that makes DSSS radios spread the signal to below the noise floor is the same thing that allows DSSS radios to ignore narrowband interference when demodulating a signal. Therefore, DSSS is known to tolerate interference very well, but it is prone to fail when the interference is at a higher total transmission power, and the demodulation effect does not drop the interfering signal below the power level of the original signal.

Given that FHSS operates over 83.5MHz of the spectrum in the 2.4GHz band, producing high-power signals at particular frequencies (equivalent to having many short synchronized bursts of narrowband signal), it will avoid interference as long as it is not on the same frequency as the narrowband interferer. Narrowband interference will, at most, block a few hops, which the system can compensate for by moving the message to a different frequency. Also, the FCC rules require a minimum separation of frequency in consecutive hops, and therefore the chance of a narrowband signal interfering in consecutive hops is minimized.

When it comes to wideband interference, DSSS is not so robust. Since DSSS spreads its signal out over 22MHz of the spectrum all at once at a much lower power, if that 22MHz of the spectrum is blocked by noise or a higher power signal, it can block 100% of the DSSS transmission, although it will only block 25% of the FHSS transmission. In this scenario, FHSS will lose some efficiency, but not be a total loss.

In licensed radios the bandwidth is narrow, so a slight interference in the range can completely jam transmission. In this case, highly directional antennas and band pass filters may be used to allow for uninterrupted communication, or legal action may be pursued against the interferer.

802.11 radios are more prone to interference since there are so many readily available devices in this band. Ever notice how your microwave interferes with your cordless phone at home? They both operate in the 2.4GHz range, the same as the rest of 802.11 devices. Security becomes a greater concern with these radios.

If the intended receiver of a transmitter is located closer to other transmitters and farther from its own partner, it is known as a Near/Far problem. The nearby transmitters can potentially drown the receiver in foreign signals with high power levels. Most DSSS systems would fail completely in this scenario. The same scenario in a FHSS system would cause some hops to be blocked but would maintain the integrity of the system. In a licensed radio system, it would depend on the frequency of the foreign signals. If they were on the same or close frequency, it would drown the intended signal, but there would be recourse for action against the offender unless they have a license as well.

Distance
Distance is closely related to link connectivity, or the strength of an RF link between a transmitter and a receiver, and at what distance they can maintain a robust link. Given that the power level is the same, and the modulation technique is the same, a 900MHz radio will have higher link connectivity than a 2.4GHz radio. As the frequency in the RF spectrum increases, the transmission distance decreases if all other factors remain the same. The ability to penetrate walls and objects also decreases as the frequency increases. Higher frequencies in the spectrum tend to display reflective properties. For example, a 2.4GHz RF wave can bounce off reflective walls of buildings and tunnels. Based on the application, this can be used as an advantage to take the signal farther, or it may be a disadvantage causing multipath, or no path, because the signal is bouncing back.

The FCC limits the output power on spread spectrum radios. DSSS consistently transmits at a low power, as discussed above, and stays within the FCC regulation by doing so. This limits the distance of transmission for DSSS radios, which may be a limitation for many industrial applications. FHSS radios, on the other hand, transmit at high power on particular frequencies within the hopping sequence, but the average power on the spectrum is low, and therefore can meet the regulations. Since the actual signal is transmitted at a much higher power than the DSSS signal, it can travel farther. Most FHSS radios are capable of transmitting over 15 miles, and longer distances with higher-gain antennas.
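(To put rough numbers on the frequency-versus-distance trade-off, the standard free-space path loss formula can be used; it shows that, all else equal, a 2.4GHz link loses roughly 8.5 dB more than a 900MHz link over the same distance. The Python sketch below is idealized; real plant environments add obstructions and multipath on top of this.)

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for f in (900.0, 2400.0):
    print(f"{f:>6.0f} MHz over 15 km: {fspl_db(15.0, f):.1f} dB")
# The 2.4 GHz link shows about 8.5 dB more free-space loss than 900 MHz at any distance.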

802.11 radios, although available in both DSSS and FHSS, have a high bandwidth and data rate, up to 54Mbps (at the time of this publication). It is important to note, however, that this throughput is for very short distances and degrades very quickly as the distance between the radio modems increases. For example, a distance of 300 feet would drop the 54Mbps rate down to 2Mbps. This makes this radio ideal for a small office or home application, but not for many industrial applications where there is a need to transmit data over several miles.

Since narrowband radios tend to operate at lower frequencies, they are a good choice in applications where FHSS radios cannot provide adequate distance. A proper application for narrowband licensed radios is when there is a need to use a lower frequency to either travel over a greater distance, or be able to follow the curvature of the earth more closely and provide link connectivity in areas where line of sight is hard to achieve.

Security
Since DSSS signals run at such low power, the signals are difficult to detect by intruders. One strong feature of DSSS is its ability to decrease the energy in the signal by spreading the energy of the original narrowband signal over a larger bandwidth, thereby decreasing the power spectral density. In essence, this can bring the signal level below the noise floor, thereby making the signal “invisible” to would-be intruders. On the same note, however, if the chipping code is known or is very short, then it is much easier to detect the DSSS transmission and retrieve the signal since it has a limited number of carrier frequencies. Many DSSS systems offer encryption as a security feature, although this increases the cost of the system and lowers the performance, because of the processing power and transmission overhead for encoding the message.
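(A hedged way to quantify how spreading lowers power density: processing gain is commonly approximated as ten times the base-10 logarithm of the chip rate divided by the data rate. The rates in the Python sketch below are illustrative, chosen only to show the arithmetic.)

import math

def processing_gain_db(chip_rate_hz: float, data_rate_hz: float) -> float:
    # Processing gain ~ 10 * log10(chip rate / data rate)
    return 10 * math.log10(chip_rate_hz / data_rate_hz)

# Example: an 11-chip code spreads each data bit over 11 chips,
# lowering the power density by roughly 10.4 dB.
print(round(processing_gain_db(11e6, 1e6), 1))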

For an intruder to successfully tune into a FHSS system, he needs to know the frequencies used, the hopping sequence, the dwell time and any included encryption. Given that for the 2.4GHz band the maximum dwell time is 400ms over 75 channels, it is almost impossible to detect and follow a FHSS signal if the receiver is not configured with the same hopping sequence, etc. In addition, most FHSS systems today come with high security features such as dynamic key encryption and CRC error bit checking.

Today, Wireless Local Area Networks (WLAN) are becoming increasingly popular. Many of these networks use the 802.11 standard, an open protocol developed by the IEEE. Wi-Fi is a standard logo used by the Wireless Ethernet Compatibility Alliance (WECA) to certify 802.11 products. Although industrial FHSS radios tend not to be Wi-Fi, and therefore not compatible with these WLANs, there may be a good chance for interference due to their operating in the same bandwidth. Since most Wi-Fi products operate in the 2.4 or 5GHz bands, it may be a good idea to stick with a 900MHz radio in industrial applications, if the governing body allows this range (Europe allows only 2.4GHz, not 900MHz). This will also provide an added security measure against RF sniffers (a tool used by hackers) in the more popular 2.4GHz band.

Security is one of the top issues discussed in the wireless technology sector. Recent articles about “drive-by hackers” have left present and potential consumers of wireless technology wary of possible infiltrations. Consumers must understand that 802.11 standards are open standards and can be easier to hack than many of the industrial proprietary radio systems.

The confusion about security stems from a lack of understanding of the different types of wireless technology. Today, Wi-Fi (802.11a, b and g) seems to be the technology of choice for many applications in the IT world, homes and small offices. 802.11 is an open standard to which many vendors, customers and hackers have access. While many of these systems have the ability to use encryption like AES and WEP, many users forget or neglect to enable these safeguards, which would make their systems more secure. Moreover, features like MAC filtering can also be used to prevent unauthorized access by intruders on the network. Nonetheless, many industrial end users are very wary about sending industrial control information over standards that are totally “open.”

So, how do users of wireless technology protect themselves from infiltrators? One almost certain way is to use non-802.11 devices that employ proprietary protocols that protect networks from intruders. Frequency hopping spread spectrum radios have an inherent security feature built into them. First, only the radios on the network that are programmed with the “hop pattern” algorithm can see the data. Second, the proprietary, non-standard encryption method of the closed radio system will further prevent any intruder from being able to decipher that data.

The idea that a licensed frequency network is more secure may be misleading. As long as the frequency is known, anyone can dial into the frequency, and as long as they can hack into the password and encryption, they are in. The added security benefits that were available in spread spectrum are gone since licensed frequencies operate in narrowband. Frequency hopping spread spectrum is by far the safest, most secure form of wireless technology available today.

Mesh radio networks
Mesh radio is based on the concept of every radio in a network having peer-to-peer capability. Mesh networking is becoming popular since its communication path has the ability to be quite dynamic. Like the World Wide Web, mesh nodes make and monitor multiple paths to the same destination to ensure that there is always a backup communication path for the data packets.

There are many concerns that developers of mesh technology are still trying to address, such as latency and throughput. The concept of mesh is not new. The internet and phone service are excellent mesh networks based in a wired world. Each node can initiate communication with another node and exchange information.

Summary
In conclusion, the choice of radio technology to use should be based on the needs of the application. For most industrial process control applications, proprietary-protocol, license-free frequency hopping spread spectrum radios (Fig. 5) are the best choice because of lower cost and higher security capabilities in comparison to licensed radios. When distances are too great for a strong link between FHSS radios with repeaters, then licensed narrowband radios should be considered for better link connectivity. The cost of licensing may offset the cost of installing extra repeaters in a FHSS system.

As more industrial applications require greater throughput, networks employing DSSS that enable TCP/IP and other open Ethernet packets to pass at higher data rates will be implemented. This is a very good solution where PLCs (Programmable Logic Controllers), DCS (Distributed Control Systems) and PCS (Process Control Systems) need to share large amounts of data with one another or with upper-level systems like MES (Manufacturing Execution Systems) and ERP (Enterprise Resource Planning) systems.

When considering a wireless installation, check with a company offering site surveys that allow you to install radios at remote locations to test connectivity and throughput capability. Often this is the only way to ensure that the proposed network architecture will satisfy your application requirements. These demo radios also let you look at the noise floor of the plant area, signal strength and packet success rate, and identify whether any segments of the license-free bandwidth are currently too crowded for effective communication throughput. If this is the case, then hop patterns can be programmed that jump around that noisy area instead of through it. MT


Gary Mathur is an applications engineer with Moore Industries-International, in North Hills, CA. He holds Bachelor’s and Master’s degrees in Electronics Engineering from Agra University, and worked for 12 years with Emerson Process Management before joining Moore. For more information on the products referenced in this article, telephone: (818) 894-7111; e-mail: GMathur@miinet.com


April 1, 2009

Tracing Fan Vibration to Flexible Soil

In this case, the cause of machinery health problems really did start from the ground up.

The call came in to Mechanical Solutions, Inc. (MSI) from an electric power generation company. Excessive vibration was plaguing two newly installed Induced Draft (ID) fans at one of the company’s flagship generating stations. The two units, each driven by an electric induction motor rated well in excess of 5000 HP, were operated in parallel at a constant speed with dampers that controlled the downstream flow of air. Elevated ID fan vibration levels at one times (1x) the running speed had been reported by the station. In fact, this vibration had been substantial enough to exceed the fan trip levels during startups time and again. Unfortunately, traditional vibration analysis techniques had not been successful in helping to resolve what initially appeared to be a straightforward machinery unbalance and/or misalignment problem.

Testing
To get to the bottom of the fan vibration issue, MSI performed a combination of operating forced response testing and impact modal testing. The purpose was to collect vibration data at locations throughout the fan systems, including the fan bearing housings, bearing supports, bearing sole plates, concrete pedestals and concrete floor pads.

The impact modal testing was conducted—safely—while the ID fans were running, so as to determine the natural frequencies and the mode shapes while the bearings were energized— which accounted for the important sleeve bearing stiffnesses of the units. An instrumented impact hammer was used to excite each bearing housing in the axial, vertical and horizontal directions, while a set of accelerometers and two multiple-channel spectrum analyzers recorded the response data throughout each fan system. The amount of force applied to the fan bearings, supports and foundations at each frequency was measured by the piezoelectric crystal contained within the head of the impact hammer. This input force was divided into each of the acceleration responses to determine a frequency response function (FRF) between the locations/directions of the hammer impacts and the locations/directions of the responses at the accelerometers. The logarithm of each FRF was plotted versus the frequency—which allowed both the low and the high response modes to be inspected with equal clarity. The peaks in the FRF plots represented the natural frequencies of the fan-pedestal-floor structural systems. The impact modal testing also was used to determine the mode shapes of the fan systems at each natural frequency of vibration. Data for each of the impact modal tests was acquired at approximately 50 locations in the three orthogonal directions on the bearing housings, bearing supports, bearing sole plates, concrete pedestals and concrete floor pads.
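(For readers who want to experiment with the FRF concept described above, the following Python sketch is a minimal illustration, not MSI’s actual processing. It uses synthetic stand-in signals and the common H1 estimator, i.e. the cross-spectrum between force and response divided by the force auto-spectrum; in practice the measured hammer-force and accelerometer records would be used instead.)

import numpy as np
from scipy import signal

fs = 2048                        # sample rate, Hz (illustrative)
t = np.arange(0, 4, 1 / fs)
force = np.zeros_like(t)
force[10] = 1.0                  # idealized hammer impact near the start of the record
# Synthetic single-mode response near 30 Hz with light damping (stand-in for accelerometer data):
accel = np.exp(-2.0 * t) * np.sin(2 * np.pi * 30 * t)

f, Pxy = signal.csd(force, accel, fs=fs, nperseg=2048)   # cross-spectrum of force and response
_, Pxx = signal.welch(force, fs=fs, nperseg=2048)        # auto-spectrum of the input force
frf = Pxy / Pxx                                          # H1 estimate of the FRF

peak_hz = f[np.argmax(np.abs(frf))]
print(f"Dominant natural frequency near {peak_hz:.1f} Hz")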

 

A specialized operating forced response vibration technique also was utilized to record test data in three orthogonal directions throughout the fan systems. This allowed the data to be subsequently processed to produce detailed, animated operating deflection shapes (ODS) of the fan systems. The data was collected under maximum load conditions, where the maximum forced response was present in the fans. Each mode shape and ODS animation displayed the relative motion, i.e. the amplitude and the phase, at each measurement location on the structure at a selected frequency. The animations were beneficial because they illustrated the relative motions of the various system components in an exaggerated fashion, which encouraged the efficient identification of the root cause(s) of the vibration problem. Still images from the ODS animations for the two ID fans are presented in Fig. 1.

Diagnosis
The collected test data confirmed that both of the ID fans had excessive vibration amplitudes that occurred in the horizontal direction at the locations of the outboard bearing housings and outboard support pedestals. The vibration spectra showed several harmonics of the fan running speed, and the highest peaks occurred at 1x and 2x the running speed. The ODS test results displayed horizontal rocking motions of the inboard and the outboard pedestals in both of the ID fan installations (see Fig. 1). There also was clear evidence of looseness of the bearing assemblies in both of the fans (e.g. the housing, support and sole plate, especially in the outboard bearing assemblies).

The ODS data for ID Fan A showed in-phase horizontal rocking motion that was driven by the rotor motion at both the inboard and outboard support pedestals. Flexibility of the base mat or soil underneath the pedestals allowed the pedestals to develop this side-to-side rigid body motion.

The test data for ID Fan B illustrated a more pronounced rigid body side-to-side or “rocking” motion at the outboard bearing pedestal, because of 1x running speed excitation driven by the fan’s rotor. Again, flexibility of the base underneath the pedestals allowed this horizontal motion to occur. ID Fan B’s rotor also described a relatively large horizontal orbit in the outboard bearing due to the motion of the bearing housing. Though the relative displacement of the shaft in the outboard bearing was relatively small and within operating specifications, the absolute motion relative to ground was several times greater. ID Fan B showed approximately 20% more deflection at the outboard pedestal in the horizontal direction than did ID Fan A, and 50% less deflection at the inboard pedestal in the same direction. Based on this information, MSI concluded that the rotor critical speed had shifted downward toward the running speed due to the high flexibility of the outboard support pedestal. This shift caused the fan to operate at resonance with the running speed.

Further, an independent structural natural frequency of the outboard pedestal was identified near the running speed. The rotor critical speed and the structural resonance interacted with each other, and were the likely cause of the modulated orbits of ID Fan B’s rotor. Rotordynamic analysis showed that the lateral stiffening of the outboard pedestals would detune the 1x resonance sufficiently to decrease the amplitude of the vibration responses to acceptable levels.
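(A hedged back-of-the-envelope view of why stiffening detunes the resonance: for a simple single-degree-of-freedom model of a pedestal, the natural frequency scales with the square root of stiffness over mass, so raising the lateral stiffness pushes the structural natural frequency up and away from the running speed. The Python sketch below uses illustrative numbers, not values from the actual units.)

import math

def natural_freq_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    # f_n = (1 / 2*pi) * sqrt(k / m) for a single-degree-of-freedom system
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

running_speed_hz = 14.8            # roughly an 890 rpm fan (illustrative)
mass = 40_000.0                    # effective pedestal plus bearing mass, kg (illustrative)
k_original = 3.5e8                 # lateral stiffness before modification, N/m (illustrative)
k_stiffened = 2.0 * k_original     # after stiffening the pedestal and base mat

for label, k in (("original", k_original), ("stiffened", k_stiffened)):
    fn = natural_freq_hz(k, mass)
    margin = (fn - running_speed_hz) / running_speed_hz * 100
    print(f"{label}: f_n = {fn:.1f} Hz, margin vs running speed = {margin:+.0f}%")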

Solution
Ultimately, the root cause of the excessive vibration was the flexibility of the soil beneath the base mats of the supporting pedestals of each ID fan. It was recommended that the horizontal stiffness of the outboard bearing pedestal and support assemblies—e.g. the base mats and the underlying soil—be increased substantially. This modification would detune the offending resonance condition of the rotor critical speed and the pedestal structural natural frequency to create sufficient margin versus the fan running speed. In addition, it was suggested that all of the bearing housing and support assemblies be tightened and stiffened as much as practical to minimize the overall vibration of the fans.

It was also noted that efforts to reduce the vibration levels by improving the balance or the alignment of the units beyond the manufacturer’s recommended parameters would at best provide a limited and temporary solution to the problem. Exceptional balance and alignment levels generally cannot be maintained in plant rotating machinery on a practical basis—especially in cases when the rotor is exposed to fly ash that accumulates easily during the routine operation of the machine. The root cause of the vibration problem, soil flexibility, would have been extremely difficult and very costly to trace without the benefit of the specialized and well-proven troubleshooting approach that was implemented in this case. On the other hand, a design audit by a qualified firm—before the ID fans were installed at the facility—could have been used to avoid this puzzling vibration problem altogether.

Maki Onari is manager of Turbomachinery Testing and Eric Olson is director of marketing for Mechanical Solutions, Inc. (MSI), headquartered in Whippany, NJ. MSI provides consulting and R&D services, such as machinery design, analysis and testing, on a wide range of equipment, including electronic systems and all types of rotating, reciprocating and turbomachinery, for end-users and OEM clients around the world. Telephone: (973) 326-9920.

For more info, enter 1 at www.MT-freeinfo.com


April 1, 2009

My Take: Cash For Clunkers and Crush For Credit

 


Jane Alexander, Editor-In-Chief

Tuesday, April 7, as I thought I was putting the finishing touches on this column about some exciting motor news, I took a few minutes to scroll through the online edition of the New York Times. In the Opinion section, an editorial entitled “Cash for Clunkers”* caught my attention. Its focus was on a movement in Congress to help take gas-guzzling clunkers off the nation’s roads and replace them with more fuel-efficient models. According to the editorial writer(s), while this idea has benefits for both the environment and our troubled auto industry, there’s a right and wrong way to get it done.

April 1, 2009

MT News

News of people and events important to the maintenance and reliability community

BOB PAGANO APPOINTED PRESIDENT OF ITT INDUSTRIAL PROCESS BUSINESS
Robert J. Pagano, Jr. has been appointed president of Industrial Process (IP), one of the ITT Corporation fluid businesses headquartered in Seneca Falls, NY. He replaces Ken Napolitano, who has been named president of ITT’s Residential and Commercial Water value center headquartered in Morton Grove, IL. Pagano is not unfamiliar with his newly announced role. He actually began his ITT career with IP, holding a series of increasingly responsible positions over the years. He eventually went on to lead that business as president from 2002-2004, a particularly challenging period, during which he helped position IP for future growth. Most recently, he had been serving as vice president of Finance for ITT Corporation, and had been CFO for the Motion and Flow Control group. ITT Industrial Process manufactures and markets industrial pumps, valves, monitoring and control systems, water treatment products and aftermarket services globally under the Goulds Pumps®, Fabri-Valve®, PumpSmart®, C’treat® and PRO Services® brands. It has 18 manufacturing plants, 14 service facilities and 32 sales offices worldwide with more than 2200 employees.

JOHN CRANE ACQUIRES ORION BEARINGS, ADDS UPSTREAM O&G PRODUCTION DIVISION
John Crane, a division of global technology business Smiths Group, has announced the purchase of Orion Corporation, a leading U.S.-based designer and manufacturer of hydrodynamic bearings for energy and general industrial markets. Headquartered in Grafton, WI, Orion complements and extends John Crane Bearing Technology, a business unit formed following the corporation’s 2007 acquisition of Sartorius Bearing Technology (SBT), based in Gottingen, Germany. Orion designs and manufactures hydrodynamic bearings for high-speed turbine, generator, compressor and gear-drive applications for the power gen, oil and gas and general industrial markets. Employing 270 people at its Wisconsin and Nebraska facilities, it reported sales of approximately $50 million for its 2008 fiscal year ending October 31.

On a related note, John Crane, already a leading supplier to the refining market, has recently added a new Production Solutions division to serve the upstream part of the oil and gas industry (oil and gas recovery, with respect to optimization of the well). Led by Tom Whipple, president, the Production Solutions division is currently made up of CDI Energy Services and Fiberod, two Texas-based companies. CDI is one of the largest artificial lift service companies in North America. Fiberod is a leader in innovative fiberglass sucker rod (FSR) technology.

EMERSON EXPANDS ITS ONLINE MACHINERY HEALTH CAPABILITIES
Emerson has acquired epro GmbH (epro), a privately held Gronau, Germany-based company that engineers, manufactures and assembles API 670-compliant protection systems delivered to the process industries worldwide. The deal expands Emerson’s online machinery monitoring capability with a full API 670-compliant protection offering. It is also expected to speed availability of next generation solutions. Terms of the deal were not announced.

EXXONMOBIL KEEPS BOOSTING GLOBAL COGENERATION CAPACITY
ExxonMobil recently inaugurated its newest high-efficiency cogeneration plant at its Antwerp refinery in Belgium. According to the company, this facility is more efficient than many traditional cogeneration plants because of its heat recovery system. In addition to generating steam, the cogeneration operation utilizes heat created in the gas-turbine exhaust to heat crude oil, the initial step in the process of converting crude oil into refined products. The unit will generate 125 megawatts and reduce Belgium’s carbon dioxide emissions by approximately 200,000 tons annually—the equivalent of removing about 90,000 cars from Europe’s roads.

“This new cogeneration plant allows for the efficient generation of electricity to run pumps, compressors and other equipment in our facilities, while at the same time, producing additional steam that is needed in processes that transform crude oil into refined products,” notes Gilbert Asselman, manager of the Antwerp refinery. “With the latest technology, cogeneration is significantly more efficient than traditional methods of producing steam and power separately. This results in lower operating costs and significantly less greenhouse gas emissions.”

With the launch of the Antwerp facility, ExxonMobil now has interests in about 4600 megawatts of cogeneration capacity in about 100 individual installations at more than 30 sites worldwide. New facilities under construction in Singapore and China will increase ExxonMobil’s cogeneration capacity to more than 5000 megawatts in the next three years.

ASSOCIATION NEWS
SMRP LAUNCHES SUPPLIER PARTNER PROGRAM

The Board of Directors of the Society for Maintenance and Reliability Professionals (SMRP) has approved a program to develop programs, products and services in partnership with its members. Known as the “Member Affinity/Partner Program,” it will allow supplier members to partner with SMRP to develop and implement programs, products and services to meet member needs. Supplier members that are interested in partnering with SMRP to provide a program, product or service to the industry or profession should submit a written proposal, including program features, benefits, cost conditions and other details, to the Improve Member Services Committee. The Committee will screen the program against some very basic criteria and, if necessary, the member will fund research among SMRP audiences to determine if there is interest in the program. The committee and staff will analyze research results and, if a sufficient level of audience interest is evident, will present the proposed program to the Board of Directors. The first program of this type to be approved is with ABB Reliability Services to provide a series of workshops at no cost to SMRP members. For additional information, or to submit a proposal for consideration, e-mail info@smrp.org.

YOUR NEWS IS OUR NEWS! OUR READERS WANT TO KNOW ALL ABOUT IT. SEND MT NEWS ITEMS TO: jalexander@atpnetwork.com


April 1, 2009

Uptime: Factory Jobs Anyone?

 


Bob Williamson, Contributing Editor

Imagine that you’re 15 years old. You like new technology. You’re a whiz at strategy games. You built your own computer and set up a wireless network. You’re yearning for your first car. You have fleeting thoughts about what you would like to do when you grow up. Earn big bucks! A job? A career? Go to college? Why? All they seem to care about anymore in school is math and science. You would like to do things, make things, build things, solve puzzles and problems, figure things out, investigate.

April 1, 2009

Viewpoint: Aligning The Right People For Profitability


Jeff Shiver, CMRP, CPMM, Managing Principal, People and Processes, Inc.

Studies have shown that many organizations suffer from a self-induced failure rate upwards of 70% in equipment reliability processes and practices. These failures result from sources such as operator error, management, purchasing and maintenance methods, to name a few. More rarely considered as a reason for failure is organizational alignment and structure—yet it can have significant financial and functional impact. This is especially true when Maintenance and Operations are decentralized and using high-performance team concepts with little or no direct supervision, and no real centralized support functions such as Planning and Scheduling.

While many manufacturing organizations abandoned the high-performance team concept over 15 years ago, not everyone followed suit. Some organizations in the last few years have chosen to revisit and pursue the concept with the great intent of empowering people down to the lowest levels. The few supervisors who remain in the organization become “coaches” since it is the team that now makes the decisions. Often, the reality is that the response time for making significant decisions slows greatly. Consider that one high-performing team organization worked with a consultant for over a year with weekly meetings just to determine (via a voting process) if they were going to plan and schedule maintenance activities.

Although there are advantages to the high-performance team concept, the disadvantages for the Maintenance group and—ultimately—for the organization as a whole, outweigh them. Consider the fact that teams don’t like to share ‘parts’ of people, especially craftspeople. Because craftspeople never work in other areas, they have no knowledge of ‘site’ or other area equipment. Because the ‘team lead’ responsibilities change every day or week, there is no continuity in direction other than getting product out the door. Loss of direct supervision skilled in maintenance is one of the first casualties. This is quickly followed by the loss of Maintenance Planning and Scheduling with the focus on production goals of the craftspeople. In light of no planned work and no preventive maintenance, equipment reliability suffers. Costs go up. Since we can’t use the craftspeople across different areas, we must staff shutdowns and other activities with contractors. Craftspeople living the cycle of reactive chaos become disenfranchised and leave the organization. As equipment reliability falls, so does the profitability due to increased downtime and ensuing loss of capacity.

Setting up the right organizational alignment to thrive and profit starts with educating leadership. A proactive team culture requires effective Maintenance Planning and Scheduling; knowledgeable and dedicated craftspeople that have direct supervision (ideally with craft skills); a Maintenance Engineering (not Project Engineering) function; and a partnership with other stakeholders such as Operations. When these functions are properly staffed and supported with mutually beneficial partnerships, you are well on the way to creating a winning team that balances empowerment with profitability.

As a long-time maintenance practitioner and now as managing principal of People and Processes, Inc., based in Yulee, FL, Jeff Shiver has educated and assisted hundreds of people and numerous organizations in implementing Best Practices for Maintenance and Operations. E-mail: jshiver@peopleandprocesses.com

The opinions expressed in this Viewpoint section are those of the author, and don’t necessarily reflect those of the staff and management of MAINTENANCE TECHNOLOGY magazine.

April 1, 2009

Lubrication Checkup: A First Step Toward Wellness

Symptom:
“We are a small manufacturing company with a maintenance staff of 8 persons. We currently support a loosely put together lubrication program, but continue to experience many premature bearing failures. We recognize the [lubrication] program is probably not very effective, and would like some suggestions on where to start our improvement efforts.”

Diagnosis:
Getting the most out of your lubrication program requires an understanding of lubrication fundamentals. Performing a “back to basics” examination of your current program will ensure that it is built on a solid foundation and derives optimal benefit from effective lubrication.

Prescription:
Examining the following two “basic” areas will help you in determining if you have major problems with your lubrication program and give you a great starting point for improvement:

1. Cleanliness – The old adage “cleanliness is next to godliness” must be the mantra of the day when working with lubrication. Bearing surface areas and lubrication systems are NOT dirt tolerant. Poor work practices and dirty lubricant storage/handling tools and areas are responsible for many premature bearing failures. Develop a cleaning regimen as part of the PM task. Ensure the lubricant reservoirs and lubricant delivery devices are always kept scrupulously clean.

2. Over-lubrication – Most bearings and motors are actually “killed with kindness” by over-zealous maintainers. Telltale signs include:

a. Blown seals – A seal is no match for the pressure of a grease gun in an untrained hand.

b. Oceans of grease surrounding or dripping from the bearing – “More is better” is a false assumption when it comes to lubrication.

c. Multiple non-standard grease guns still in use – No two grease guns are alike; they all have different pressure ratings and delivery specifications.

d. Subjective PMs stating “lubricate as necessary” – Rarely will two individuals’ ideas as to the necessary amount be the same.

e. Grease-packed motor armatures – Many motors expire prematurely from over-lubrication.

On your organization’s road to lube wellness, it’s important to examine for evidence of these signs. Then, seek help from a Lubrication Engineering specialist to provide proper training and assistance in realigning your lubrication program.

 


Have lubrication questions? Contact Dr. Lube, aka Ken Bannister, who specializes in helping companies throughout industry implement practical and successful lubrication management programs. The noted author of the best-selling book Lubrication for Industry and of the 28th edition Machinery’s Handbook section on Lubrication, he also is, among other things, a contributing editor to both Maintenance Technology and Lubrication Management & Technology magazines. E-mail: doctorlube@atpnetwork.com

