Author Archive | Maintenance Technology


4:30 pm
July 18, 2016

Final Thought: What’s Your Elevator Pitch?

By Dr. Klaus M. Blache, Univ. of Tennessee Reliability & Maintainability Center

The terms “elevator talk” or “elevator pitch” refer to a brief presentation or explanation delivered in the time it typically takes to ride an elevator from one floor to another, i.e., anywhere from 30 seconds to a few minutes. Savvy people in all walks of life have them ready on key topics to efficiently and effectively get their points across to others. So how do we explain reliability engineering in an “elevator talk”?

To another engineer, my pitch would go something like this: “Reliability is the likelihood that process/product/people will carry out their stated functions for the specified time interval when operated according to the designated conditions. Maintainability is the ease and speed of maintenance to get the system back to its original operating conditions. Availability is being ready for use as intended. Since availability is a function of reliability and maintainability, reliability engineers work on improving both throughout the lifecycle of assets and products.”
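Since availability is described as a function of reliability and maintainability, the relationship can be sketched with the standard steady-state availability formula (the formula and the example numbers are not from the article; they are a common illustration):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: uptime as a fraction of total time.

    mtbf_hours: mean time between failures (a reliability measure)
    mttr_hours: mean time to repair (a maintainability measure)
    """
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A hypothetical pump that runs 950 hours between failures
# and takes 50 hours to restore:
print(round(availability(950, 50), 3))  # 0.95
```

Improving reliability raises MTBF; improving maintainability lowers MTTR; either change raises availability, which is why reliability engineers work on both.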

If that discussion were to go well and time permitted, I would go on to explain that a comprehensive reliability process can be used to perform continuous improvement and enable any organization to attain top quartile performance.

One published definition is, “Reliability engineering emphasizes dependability in the lifecycle management of a product… Reliability engineering deals with estimation, prevention, and management of high levels of lifetime engineering-uncertainty and risks of failure.”

A generic definition might be, “Reliability engineering enables an asset to perform its intended function without failure for the specified time, when built, installed, and operated as designed.”

Another source offers, “Principles and practices associated with reliability requirements (such as prediction of failure time and conditions) and their translation into specifications that are incorporated in product design and production.”

All of these definitions, however, assume a level of knowledge of the referenced concepts on the part of the audience. Also, by using broad definitions, much is left to individual interpretation. Explaining our work to non-engineers can be tough.

At a recent social event, a lawyer asked me what I do. When I answered “reliability engineering,” he asked what that meant. After 10 minutes of explanations, it was clear he still wasn’t close to understanding the importance or relevance of the field, or what it is. Spending about five more minutes trying to clarify things for him, I came to realize that even with all I know about reliability, I still needed an elevator talk for non-engineers. Here’s what I’ve come up with:

“If your car starts every time you need it and gets you to your destination, it has high reliability.  If your car can be quickly and properly maintained (preserved in a like-new state) when something does go wrong, it reflects good maintainability. Because of high reliability and good maintainability, your car is available whenever you need it. Reliability engineering uses calculations, tools, and techniques to evaluate the risks of human and asset failure and avoid related consequences. This applies to everything from a single component to an overall production process. These concepts are applied to the machinery, equipment, and facilities that produce products such as cars, chemicals, steel, food, energy, aircraft, spacecraft, and household goods. Because it can improve so many parts of any organization, reliability engineering is an ongoing process.”

Reliability is so all-inclusive in what it can positively affect, that our attempts to explain it often seem vague. Conversely, using only a single example makes it sound too simplistic.

If you have a good reliability-engineering elevator talk (for delivery to non-engineers), please send it to me. I would like to hear it. MT

Klaus M. Blache is director of the Reliability & Maintainability Center at the Univ. of Tennessee, Knoxville, and a College of Engineering research professor. Contact him at


7:06 pm
July 13, 2016

Calculate the Impact of Unreliability On Sales

While most acknowledge that unreliable operation is costly at the plant level, the impact, when projected to sales, is enormous.

By Al Poling, CMRP

Generally speaking, manufacturing personnel understand the effect unreliability has on maintenance. Unreliability requires more maintenance resources and materials to repair failed equipment, as well as increased maintenance capital spending to replace equipment that has reached the end of its useful life. Running equipment to failure causes equipment to reach that point prematurely. What many manufacturing personnel do not understand is the effect unreliability has on sales.

Maintenance professionals find it difficult to garner the support of corporate executives who do not understand maintenance. However, these same executives have a very clear understanding of profit and loss. If they understand the effect unreliability has on sales and, therefore, profit, they will be much more inclined to support a comprehensive reliability initiative. It might surprise many maintenance professionals to learn that there is a mutual benefit to be derived from reliability: reduced maintenance costs and increased sales and revenue.

To understand this relationship, we must examine the basic business model. All for-profit businesses operate under the same equation:  PROFIT = SALES – COST. Equipment failures affect both sides of this equation.

“Calculate the True Cost of Unreliability,” an article published in the February 2016 issue of Maintenance Technology, examined the impact unreliability has on maintenance costs. In this article we will examine the effect unreliability has on sales.

You can apply the following calculations to your own operations to develop an order-of-magnitude estimate of the impact unreliability has on sales and profitability.

For calculation purposes, we will use a hypothetical plant that has a plant-replacement value (PRV) of $1 billion US, with a targeted return on capital employed (ROCE) of 30%. In other words, business stakeholders expect to realize $300 million in earnings before interest and taxes on their $1 billion investment. We will also assume that this plant operates at 70% capacity due to lack of sales.

Raise sales price

Sales revenue is driven by two key levers: price and volume. The higher the sales price per unit, the higher the margin, the higher the sales revenue, and the greater the profit. Likewise, the more product you sell (sales volume), the higher the sales revenue and the greater the profit. Both sales price and sales volume thus determine the revenue garnered by the business, and unreliability has a profound effect on both factors. To understand the relationship between asset reliability and sales revenue, we need to examine each component in more detail.

The price of a product is largely set by whatever price the market will bear. However, the market places a premium on quality. The highest sustainable product quality can only be produced through uninterrupted manufacturing. As assets become more reliable, manufacturers are able to produce consistently higher quality product, something customers value. This isn’t new. W. Edwards Deming espoused the virtues of product consistency more than a half century ago.

If a 5% price premium can be garnered from customer willingness to pay more for higher quality product, then the subsequent increase in sales revenue is calculable. Assuming the hypothetical plant had $500 million in sales during the reporting period, the increased revenue from a higher price enabled by higher-quality product would be an additional $25 million in sales revenue.
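The premium calculation can be sketched in a few lines of Python (the figures are the article's hypotheticals):

```python
annual_sales = 500_000_000   # hypothetical plant's sales for the period ($)
price_premium = 0.05         # 5% premium for consistently higher-quality product

# Revenue gained purely from charging more for the same volume
added_revenue = annual_sales * price_premium
print(f"${added_revenue:,.0f}")  # $25,000,000
```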

This increase in sales revenue was made simply by reducing and/or eliminating unplanned equipment failures. No additional capital was required, resulting in a direct increase in the return on capital employed and, more importantly, on profitability.

LINE ITEM: $25 million = The increase in revenue due to higher sales price for higher quality product derived from reducing and/or eliminating unreliability.

Increase capacity

A second sales-revenue benefit derived from the elimination and/or reduction of unreliability is garnered through a lower cost per unit (CPU) of production. By operating in a failure-free mode, manufacturers are able to increase throughput. When there are fewer production interruptions caused by equipment failures, more product is made over the same period of time.

For example, if the average production rate was 80 tons per day, including time lost to equipment failures, then a natural benefit derived by reducing and/or eliminating equipment failures would be an automatic increase in capacity. If one additional hour per day of production was gained, the subsequent increase in capacity would be 4%.

A 4% increase on $525 million in annual sales revenue would be worth an additional $21 million in sales revenue. As was the case with improved product quality, this increase in capacity was derived without any additional capital investment. Companies are always striving for increased sales by whatever means, but they inevitably expect to have to invest significant capital in a new production unit or to expand an existing production unit.
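The capacity arithmetic works out as follows: one extra production hour per day is 1/24, or about 4.2%, which the article rounds to 4%; the $525 million base is the original $500 million in sales plus the $25 million quality premium. A quick sketch:

```python
hours_gained_per_day = 1
capacity_gain = hours_gained_per_day / 24   # ~0.042; the article rounds to 4%
base_sales = 525_000_000                    # $500M sales plus the $25M premium

added_revenue = base_sales * round(capacity_gain, 2)
print(f"{capacity_gain:.1%}")    # 4.2%
print(f"${added_revenue:,.0f}")  # $21,000,000
```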

LINE ITEM: $21 million = The incremental sales gained through the incremental increase in production capacity derived from reducing and/or eliminating unreliability.

Increase sales margin

Additionally, a 5% reduction in the cost per unit derived by spreading costs, e.g., operational and energy costs, over a larger volume of product could be significant. This is effectively an increase in the sales margin of the product being sold. Using the aforementioned $500 million in annual sales, the benefit would be 5% of $500 million, or an additional $25 million in profit.

LINE ITEM: $25 million = The increase in profit caused by an increased sales margin gained by reducing the cost per unit derived from reducing and/or eliminating unreliability.

Admittedly, an argument against the aforementioned gain could be made. Just because you produce more product doesn’t mean that you can sell it. But let’s examine the primary means of competition in a capitalistic environment. Companies generally compete on price and/or on quality. By reducing and/or eliminating equipment failures, both of these factors are enhanced. If you have a higher quality product to offer, your competitive position is automatically strengthened. You can increase price to increase sales revenue and/or maintain the same price and increase sales volume by offering a higher quality product for the same price.

The gains illustrated above appear to be reasonable, so we’ll assume that we could potentially increase sales price and sales volume, thereby deriving a dual benefit from the reduction and/or elimination of unreliability.

Reduce maintenance

We must also consider that, with a reduction in unreliability, maintenance costs, typically the highest fixed cost in manufacturing, are substantially lowered. Maintenance costs are distributed across all production in the form of maintenance cost per unit of production. The net result of lower maintenance cost is therefore lower cost per unit of production. In a poorly performing operation, characterized by high unreliability and subsequent high maintenance cost, the benefit derived from reducing the maintenance cost per unit alone can be profound. Benchmark studies have shown that the difference between a best performer and a worst performer, relative to maintenance cost, can be exponential. In other words, a worst performer will spend exponentially more on maintenance per unit of production than a best performer.

In the process industry, the range of performance in maintenance cost as a percent of plant-replacement value (PRV) is from less than 1% for best performers to more than 15% for worst performers. For illustration purposes we will assume a 1% reduction in maintenance cost as a percent of PRV. We will assume maintenance costs were 3% of PRV, but have been reduced to 2% of PRV by implementing a robust condition-monitoring program that facilitates corrective action prior to catastrophic failure. The net increase in profit through reduced maintenance costs based on a PRV of $1 billion would be $10 million.
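The maintenance-cost benefit is a straightforward percentage-of-PRV calculation, using the article's assumed figures:

```python
prv = 1_000_000_000    # plant-replacement value ($)
maint_before = 0.03    # maintenance cost at 3% of PRV
maint_after = 0.02     # reduced to 2% of PRV via condition monitoring

savings = prv * (maint_before - maint_after)
print(f"${savings:,.0f}")  # $10,000,000
```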

LINE ITEM: $10 million = The increase in profit gained by a reduction in maintenance cost derived from reducing and/or eliminating unreliability.

Extend turnaround frequency

Although it is not universally recognized, maintenance turnarounds are caused largely by unreliability. The primary driver for turnarounds is typically pressure-equipment inspection. But what if you used non-intrusive condition monitoring such that you eliminated the need to open equipment for visual inspection?

Far too many process plants still take annual turnarounds. In this era of advanced inspection technologies, that is inexcusable. Better-performing process plants have extended their turnaround intervals to 5 to 7 yr. Let us assume that the hypothetical plant still takes annual turnarounds that cause 21 days of lost production. If the turnaround interval was extended to 3 yr., with only a 7-day increase in duration, a net annualized increase in production of approximately 12 days would be realized.
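The annualization behind the "approximately 12 days" figure can be checked directly, assuming a 7-day increase in turnaround duration when the interval stretches to three years:

```python
days_lost_per_year = 21       # annual 21-day turnaround
new_cycle_years = 3           # turnaround now taken once every 3 years
duration_increase_days = 7    # assumed larger scope for the 3-yr turnaround

# Downtime per year under the new schedule, averaged over the cycle
annualized_loss = (days_lost_per_year + duration_increase_days) / new_cycle_years
days_gained = days_lost_per_year - annualized_loss
print(round(days_gained, 1))  # 11.7 -- "approximately 12 days"
```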

If we conservatively calculated the value of each day of production, based on current production rates and sales prices, 12 additional days of production would net an additional $18 million in sales revenue.

LINE ITEM: $18 million = The increased sales revenue gained from 12 additional days of production derived from reducing and/or eliminating unreliability caused by annual turnarounds.

Increase production

The final potential gain we will examine is the 30% of production capacity that is not currently utilized, ostensibly because of a lack of sales. Claiming that no sales were lost due to unreliability is a self-fulfilling prophecy. As long as the manufacturer is not a sole source producer, additional sales were lost to competitors. If we go back to the benefits of the highest sustainable product quality and lowest sustainable unit cost of production, there would be no valid reason for not selling every unit of production. That additional 30% of production and subsequent sales is a game changer for the business. Using the original assumption of $500 million in annual sales, adding in the additional sales revenue from continuous production, and ignoring the quality premium, the net gain in sales revenue is an astounding $215 million.
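The full-capacity figure follows from scaling the 70%-utilization sales up to 100%:

```python
annual_sales = 500_000_000   # sales while running at 70% of capacity
utilization = 0.70

full_capacity_sales = annual_sales / utilization
added_revenue = full_capacity_sales - annual_sales
print(f"${added_revenue:,.0f}")  # $214,285,714 -- roughly the $215M cited
```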

LINE ITEM: $215 million = The increased sales revenue gained by running continuously, derived directly and indirectly through the reduction and/or elimination of unreliability.

There are arguably additional sales and revenue gains that can be derived through the reduction and/or elimination of unreliability. However, using the examples above we can see that a significant increase in sales and related revenue can be gained through reliable operation.
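Summing the article's line items gives the overall size of the opportunity for the hypothetical plant (the labels are paraphrased; the dollar values are the article's):

```python
line_items = {
    "price premium on higher-quality product":   25_000_000,
    "incremental capacity from fewer failures":  21_000_000,
    "higher sales margin (lower cost per unit)": 25_000_000,
    "reduced maintenance cost":                  10_000_000,
    "extended turnaround interval":              18_000_000,
    "running at full capacity":                 215_000_000,
}

total = sum(line_items.values())
print(f"${total:,.0f}")  # $314,000,000
```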

This is not an insignificant amount of sales revenue for any size organization. The business case for reliability is compelling! Although a hypothetical manufacturing site was used to illustrate the effect of unreliability on sales, the same calculations can be used to obtain an order-of-magnitude estimate of the value of lost sales due to unreliability for any plant. Plant management and corporate leaders need to understand the high cost of unreliability. All it takes is for someone to take the initiative and calculate the value for your operation. Once the true cost of unreliability has been exposed, garnering support for improved reliability should be easy! MT

Al Poling has more than 35 years of reliability and maintenance experience and is a Certified Maintenance and Reliability Professional (CMRP). His consultancy, RAM Analytics, is located in Houston. For more information, contact him at

Click here to download an ebook pdf containing this article and Al Poling’s February 2016 article “Calculate the True Cost of Unreliability”.


4:19 pm
July 13, 2016

Troubleshoot Poor Machine Alignment


According to Tom Shelton of Richmond, VA-based VibrAlign Inc., there are a number of possible reasons that machinery-alignment results may not live up to expectations. Many, he says, can be eliminated in the pre-alignment phase.

To that end, Shelton and other VibrAlign experts and customers have compiled the following “evergreen” list of culprits that could cause the unintended movement affecting alignment outcomes. While these movement sources are different in many instances, they’re common in others.

As Shelton pointed out in a January 2016 blog post on the VibrAlign website, this list isn’t complete. “It’s intended,” he wrote, “to be used as a seed to help you find the source of pain in your alignment.”

To give credit where it’s due, he explains that the idea for the list originated with a student in a training class. He strongly recommends keeping a laminated copy of it with or near your site’s alignment tool(s).
— Jane Alexander, managing editor

Surrounding environment

  • Vibration caused by attached or nearby equipment
  • Interference in the laser beams, i.e., steam, condensation, rain, dust, anything that could reflect or refract a laser light.

Alignment tool


  • Loose brackets or other components
  • Dirty lenses
  • Inappropriate measuring mode for conditions
  • Incorrect machine dimensions input on set-up
  • Laser beam broken by coupling or structure
  • Gross misalignment causing beam to run off of sensor
  • Gross misalignment causing coupling influences on the rotational centerlines.

Shims


  • Dirt or debris between shims
  • Bent shims
  • Shims against threads on bolt
  • Actual shim thickness different than stamped thickness (Tip: Shims larger than 0.025 in. should be mic’d.)
  • Multiple people shimming, putting different shim thicknesses under the feet
  • Wrong size shims (Tip: Always use the shims that give the most contact between the base and the machine foot.)

Machinery components

  • Excessive bearing or component wear (Tip: Do a lift check to determine bearing wear.)
  • Coupling bore off center or skewed
  • Cracked or broken machine case or frame
  • Soft foot
  • Coupling wear
  • Coupling-insert wear
  • Incorrect coupling gap
  • Shaft or hub contacting opposite component shaft
  • Coupling binding due to poor or incorrect rough alignment
  • Pipe and/or conduit strain
  • Motor “belly” contacting base.

Machine base

  • Broken or crumbling base
  • Dirt and debris under machine or machine feet
  • Rusted base, bolts, or feet
  • Top thread of bolt hole in a threaded base pulled up, creating a bump
  • Stripped hold-down bolts or studs
  • Cupped washers (Tip: Hardened machine-base washers or Grade 8 are recommended.)
  • Improper base installation, i.e., hollow or warped base.

Alignment processes

  • Mandatory pre-alignment steps not completed or improperly executed, i.e., inadequate rough alignment, soft foot not corrected or an improper or too aggressive tightening sequence
  • Poor backlash management
  • Lack of training. MT

For more information on the topic of equipment alignment, including what to do “when all else fails,” and/or to add to this list of possible culprits, see Shelton’s complete “Troubleshooting Tips” post at

Tom Shelton, a technical trainer with VibrAlign, is a journeyman millwright/pipefitter who spent 16 years in the paper industry.


4:05 pm
July 7, 2016

Change Your Game to Proactive


Expanding the definition of reliability to include the human element can transform an organization’s culture and boost overall business performance.

By Jeff R. Dudley, HSB Solomon Associates LLC

Question: What’s a common denominator among the Beatles, Elvis Presley, Led Zeppelin, Jack Nicklaus, Michael Jordan, Wayne Gretzky, Steve Jobs, and Bill Gates?

Answer: The ability to change the game in their respective vocations or pursuits.

Game changers are entities that transform existing situations or activities in significant ways. They develop ideas or means of performing that completely alter the way a situation, profession, or even an industry develops. Each of the aforementioned groups or individuals had a profound impact on their profession. At times, they appeared iconic, having a significant impact on the culture around them. Looking at them, you would label them as leaders, innovators, and proactive in their approaches.

When businesses exhibit these characteristics, they are thought to have great leadership and are considered forward thinking, or proactive. They’re also typically highly successful in their respective sectors.

Figure 1. The culture continuum.

Culture continuum

All businesses develop a culture that defines them. The more leadership-focused an organization is, the more proactive it becomes. Figure 1 illustrates a culture continuum moving from highly reactive to highly proactive, highly manager-led to highly leader-led, and low reliability to high reliability. As an organization’s culture moves more to the right side of the chart, the opportunity for achieving game-changing performance increases. That’s good news. The bad news is that it’s still uncommon to find organizations moving toward and performing on the far right.

Unlike individual game changers, countless organizations that hope to achieve game-changing performance often lack the culture to sustain the dedication and commitment to excellence that will keep that performance going. Instead, they languish in a reactive, manager-led state. The biggest obstacle to their success is that it is simply easier to be complacent than to constantly strive for improvement.

Staying reactive leads to inevitable outcomes

By necessity, reactive organizations have to be manager-led—everybody has to be told what to do. Since seeking permission is standard operating procedure, things happen much slower. With risk-taking held in check, very little innovation occurs. As a result, such organizations become stuck in “this is how it has always been done” mode.

Following the same old path and holding risk in check, however, has another downside: Reactive work, coupled with lack of planning, leads to greater risk to employees and the business itself. Let’s examine some data that support this pattern.

The “International Study of Plant Reliability and Maintenance Effectiveness” (RAM study) from Solomon Associates, Dallas, shows that third- and fourth-quartile (Q3 and Q4) performers are more reactive than first- and second-quartile (Q1 and Q2) performers. (All Solomon studies break results into performance quartiles, with Q1 being the best and Q4 the worst.)

This research also shows that significantly less of the work done by companies with reactive cultures is planned and scheduled. Moreover, there’s an elevated potential for high-level emergency work that must be completed. All of these factors potentially put employees at higher risk. According to the compiled data, reactive organizations do approximately twice as much work as their proactive counterparts.

So how does a reactive culture put businesses at risk? Reactive cultures incur significantly more asset downtime and, as a result, cannot produce as much product as they otherwise could, making them less profitable. Reactive cultures are also less productive in all aspects of running a business. They experience higher operating costs than proactive cultures and generally spend less time operating their assets and more time fixing them.

Another issue to consider has to do with the frequency and severity of environmental incidents. Solomon’s data indicate that such incidents occur more frequently, and are more severe, in reactive cultures than in proactive ones. Obviously, these types of issues can have a negative impact on business sustainability.

In any organization, profitability is driven by asset availability and the cost to produce that availability, assuming a saleable product with sufficient standard margin. Reactive cultures typically have a strong cost focus, allowing cost to drive reliability. This type of environment creates a short-term focus on profitability and often results in de-capitalized assets and unsustainable profitability.

Performance improvements

As a culture moves to a proactive mindset, the organization becomes more resilient and possesses the capacity to turn the previously described negatives into positives—something we refer to as “LeadeReliability.” This is where game-changing performance occurs.

As proactive organizations evolve, they become more leader-led. Administrative leaders trust that employees have been trained, know what to do in most situations, and will do it well. They use their leadership skills to teach others how to lead in their area of expertise, developing leadership traits in all employees. As more personnel apply their skills and ideas, innovation and improvement flourish. As the pattern continues, the organization transforms itself. Everyone begins to act like a leader, and performance, as a whole, improves dramatically.

Risk also is minimized because of the proactive nature. The opposite of what was previously discussed is true: Fewer employees are involved in every task and overtime is typically held to a minimum. Tasks are planned, scheduled, and completed on time, with little emergency work. Employee risk is minimized because most work and potential hazards are thought out ahead of time. These organizations typically have stellar safety results and performance.

Business risks are minimized as well because these organizations are reliable suppliers to their customers—and their customers know they can depend on them. As a result, these organizations are the ones other customers turn to when reactive organizations cannot meet their commitments. Proactive organizations create customer loyalty.

Proactive cultures focus on reliability first and allow cost to create the desired reliability, which sometimes means that, in the short-term, they spend more in a targeted fashion to address a specific issue. The outcome of this mindset is high reliability at the optimum cost to achieve it.

Game changing

While proactive cultures that continue on their reliability journey become game changers in all areas, the change in reliable performance is especially noteworthy. These organizations stand out because they have patiently and bravely followed a path that has made them significantly different from others. They excel in the following three areas:

Culture: Organizations that excel in the area of culture focus tirelessly on minimizing unplanned events, identifying the abnormal, and not allowing the abnormal to become normal. They plan, schedule, commit to, and complete tasks as planned and scheduled. They do not accept the status quo and know they can continuously improve.

Individuals in these types of organizations think of themselves as “leaders” who can make a difference in what they do. By constantly focusing on “what could go wrong,” they don’t allow unplanned events to be disruptive. If and when problems do arise, they have already decided what to do about them. They believe they can learn from everything and take every opportunity to teach what they have learned. They are well trained and follow procedures, yet treat each procedure as a living document that can be improved. They have authority and freedom to act to address any situation. They use experts inside and outside the organization to reach solutions. Others desire to be a part of such organizations.

Reliability: Organizations that excel in this area consider reliability in broader terms than those with an asset focus. Their definition of reliability also deals with how personnel conduct their business and meet their commitments. They realize that, without reliable personnel, their assets don’t stand a chance. Because their focus is on a long-term commitment to reliable assets, however, such organizations deliver impressive results. Q1 performers consistently deliver more than 97% mechanical availability. More than 75% of their downtime is due to planned turnarounds.

Spending on Reliability and Maintenance: Cost control is not the driver for organizations that excel in this area. During their journeys, spending is optimized to deliver desired asset reliability. The culture has a positive impact on the amount of spending that occurs. A focus on proactive discovery, addressing abnormal conditions, planning, scheduling, and efficient completion of the work drives costs to the optimum level. As the culture continues to develop in such organizations, spending is driven down. Note that game changers are not the lowest-spending organizations. In fact, our research has shown that the lowest-spending operations typically have poor reliability and are normally Q3 performers.

Scoring big

If you want your organization to be the Michael Jordan of its industry, you must focus on reliability. Your definition of reliability will put employee behavior at the forefront. Reliable human behavior creates reliable asset operation.

During the journey, you will focus on growing a reliable culture and targeting your spending. You understand that you operate under the same constraints as your competition. But, because you are constantly becoming more reliable, you target your spending to eliminate future spending. As a result, you are constantly becoming more profitable than if you had not started the journey. You do not make promises to customers, employees, and stakeholders that you cannot keep.

As a result, you and your organization become a game changer in the area of reliably running your business and create customer loyalty, employee engagement, and sustainable profitability. MT

Jeff Dudley is a senior consultant with Dallas-based HSB Solomon Associates LLC. He has spent more than 30 years in the reliability and maintenance arena. Contact him at

Learn more:

Choose Reliability or Cost Control

Set Your Mind for Complete Reliability

Seven Steps to Culture Change

Match Attitude, Structure, to Change Culture


3:31 pm
July 6, 2016

Do Employees Make Your Network Vulnerable?

Employees, most of the time innocently, can be the weakest part of a company cyber-security plan. Education is the key to strengthening that plan.

By Dennis Egen, Engine Room

Some of the largest and most damaging security breaches in history occurred in 2015. According to a May 2015 study by the Ponemon Institute, Traverse City, MI, commissioned by IBM, the average total cost of a single corporate data breach was $3.79 million, an increase of 23% from 2013.

The breaches that received the most attention in recent years were those affecting millions, sometimes tens of millions, of consumers and their personal information: eBay, Target, Anthem, Premera Blue Cross, and the Federal Office of Personnel Management, to name a few.

But the manufacturing environment isn’t immune. In 2013, Symantec, a global cyber-security company, reported that manufacturing was the most targeted sector for cyber attacks, accounting for 24% of all targeted attacks. Theft of personal data isn’t the objective of the cyber attacks on manufacturers. Instead, the main security concerns in the manufacturing environment are intellectual property theft, data alteration, and outside interference in manufacturing processes.

Despite these threats, American manufacturers have not taken the most basic steps to secure their data from the single biggest threat to information security—their own employees.

It has been estimated that 60% of data compromises are caused by employees or insiders (freelancers, contractors, consultants). The vast majority of these breaches are unintentional.

Rogue employees

So what should be done to address this internal threat? First, recognize that, while most employee-caused data breaches are due to negligence or lack of proper data-security education, the potential actions of disgruntled employees must also be considered. Rogue employees, especially members of the IT team with access to network, data center, and administrative accounts, can severely compromise a manufacturer’s important data. Corporate vigilance can go a long way toward curbing this kind of activity. Notice telltale changes in employee behavior:

  • Is a usually reliable employee’s performance dropping?
  • Is an employee acting differently with colleagues?
  • Is a normally prompt employee now habitually arriving late to work?

Such vigilance may help identify potentially harmful activity as it happens. But being proactive will, in the end, provide greater information security:

  • Perform an annual information security audit.
  • Identify all privileged accounts and credentials. Which users have access to what data?
  • Create attack models to identify exposure to insider threats and perform a damage assessment of these threats.
  • Closely monitor, control, and manage privileged credentials to prevent exploitation.
  • Control flow of inbound delivery methods.
  • Filter executable mail and web links.
  • Monitor and look for irregularities in outbound traffic.
  • Implement necessary protocols and infrastructure to track, log, and record privileged account activity.
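As an illustration of the last two points, a monitoring script might flag privileged-account activity that occurs off-hours or that moves an unusual volume of data. This is a minimal sketch only; the log format, account names, business hours, and outbound limit below are all hypothetical and would need tuning to a real environment.

```python
from datetime import datetime

# Hypothetical audit-log entries: (account, timestamp, megabytes sent outbound)
LOG = [
    ("svc_backup", "2016-06-13 02:14", 4800),
    ("jdoe_admin", "2016-06-13 10:05", 12),
    ("jdoe_admin", "2016-06-13 23:40", 950),
]

BUSINESS_HOURS = range(7, 19)   # 7:00 a.m. through 6:59 p.m.
OUTBOUND_LIMIT_MB = 500         # example threshold; tune per environment

def flag_irregularities(log):
    """Return entries that occur off-hours or exceed the outbound-traffic limit."""
    flagged = []
    for account, stamp, mb_out in log:
        hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
        if hour not in BUSINESS_HOURS or mb_out > OUTBOUND_LIMIT_MB:
            flagged.append((account, stamp, mb_out))
    return flagged
```

In practice this kind of rule would feed a review queue rather than trigger action on its own, since legitimate maintenance work also happens off-hours.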


Negligent or careless employees

Interestingly, one of the main factors in employee-caused data breaches is that outside hackers have changed the focus of their attacks. As companies have become more aware of external threats, they are improving their security procedures, implementing the latest security technologies, creating effective policies, and employing greater vigilance. So some outside attackers are shifting their focus, attacking enterprises through their employees by targeting less-secure home systems to gain access to manufacturer networks.

Aside from this possible shift in focus by some outside attackers, what’s behind the problem of negligent, careless employees? Workplace stress, multitasking, and long hours are contributors. But lack of education about information security and work policies is the main culprit. Most employees aren’t aware that several of their common work habits can easily put company data at risk.

Of course, accidents happen, such as leaving a laptop on the train or at a restaurant, or mistakenly sending an email containing confidential information to the wrong person. But other potentially damaging practices can and should be prevented.

According to one provider of identity-protection and fraud-detection solutions, about 60% of users with access to a company network use the same login credentials on non-company sites such as Facebook, Twitter, and LinkedIn. Since many targeted breaches begin with a phishing effort to grab users’ social-media passwords, these users inadvertently put confidential company login information within easy reach of attackers.

Employees who want to finish some work at home may be putting sensitive files on a cloud-storage application such as Dropbox, which can lead to mixing and sharing of personal and corporate data.

Other common contributors to employee-caused security breaches include:

  • using weak passwords (containing fewer than eight characters; not employing upper and lower case letters; containing personal information such as birthdates, phone numbers, or addresses; using word or number patterns such as abcd or 12345)
  • not changing passwords frequently
  • visiting unauthorized websites
  • clicking on links from people they don’t know
  • failing to protect their laptop screens from prying eyes when working outside the office
  • using generic USB drives that are not encrypted or safeguarded by other means.
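The password rules in the first bullet above are straightforward to automate in a pre-enrollment check. Here is a minimal sketch; the sequential-pattern list and the personal-info inputs are illustrative assumptions, not an exhaustive policy.

```python
PATTERNS = ["abcd", "12345", "qwerty"]  # example word/number patterns to reject

def is_weak(password, personal_info=()):
    """Apply the rules listed above: minimum length, mixed case,
    no personal information, no common sequential patterns."""
    if len(password) < 8:
        return True                       # fewer than eight characters
    if password == password.lower() or password == password.upper():
        return True                       # no mix of upper- and lower-case
    lowered = password.lower()
    if any(info.lower() in lowered for info in personal_info if info):
        return True                       # contains birthdate, phone, address, etc.
    if any(p in lowered for p in PATTERNS):
        return True                       # contains a sequential pattern
    return False
```

A check like this catches only the habits listed above; it is a floor, not a substitute for an organization-wide password policy.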

BYOD: a major culprit

Employees used to leave their work data at work. Now, mobile devices give employees access to corporate data anywhere, anytime. BYOD (bring your own device) has become a major risk for company data security. BYOD allows hackers to exploit poor employee security habits and weak passwords with the use of fake free Wi-Fi networks, fake login pages for popular sites, and phishing emails. A recent survey showed that 60% of employees either have no security or have stuck to the default settings for their mobile devices.

Here’s how the BYOD trend can have an impact on business:

Mobile phishing: Phishing can be used to attack mobile users as well as computers. Hackers can engineer an email to trick a user into opening a malicious attachment or clicking on a link. The attacker can then use the information gained from this phishing expedition to connect to the corporate IT network and steal data.

Being compromised by corporate-network attacks: Many outsider attacks take advantage of the fact that current network-security solutions lack the visibility required to protect mobile devices once those devices leave the corporate network. Attackers therefore focus on mobile devices traversing public and private networks.

One of the basic ways to keep mobile devices safe and secure is to ensure that devices remain updated to the latest operating system version with full security protection. However, a more comprehensive approach is required. Here are some suggestions:

  • Employees will take information security seriously when they know it is an important focus of their company’s management. Make security a part of performance appraisals. Let employees know that IT security also means job security.
  • Create a written information security plan and share it with employees.
  • Educate employees about the need to change their work behavior in an age of increased BYOD. They should know about phishing, shoulder-surfing (an individual peering over the shoulder of an electronic-device user to acquire personal-access information), password protection, physical hardware security, and basic encryption.
  • Use software to manage mobile devices. This could be as simple as settings on the company exchange server, or more advanced use of mobile-device-management software such as Good or AirWatch.

There are a few don’ts:

  • Don’t use public Wi-Fi when performing client or sensitive corporate work.
  • Don’t click on any link in an email if you are not 100% sure of its source.
  • Don’t use work login information for social media.

Having programs and processes in place that include a mixture of training, policy, and technology is vital to addressing insider threats before they become a major issue. MT

Dennis Egen is president and founder of Engine Room (, a technology and security firm based in Philadelphia.

Learn More

If You Build It, Secure It: Think Like a Hacker

Engine Room, Philadelphia, helps clients mitigate risks by identifying and addressing vulnerabilities before they can be exploited.


10:08 pm
June 13, 2016
Print Friendly

Optimize Pump Performance

If pump systems are not optimized, entire processes suffer.

While pumps may be the foot soldiers of the process industries, their quiet dedication means they’re often ignored. That’s a risky business strategy for any site: Components break down, pumps run below optimal efficiency levels, and entire processes suffer. Experts at SKF (Gothenburg, Sweden, and Lansdale, PA) highlight several proven strategies to help optimize your plant’s pump-fleet performance.

Select the right bearing.
Bearings in centrifugal pumps support hydraulic loads imposed on the impeller, the mass of the impeller and shaft, and loads due to couplings and drive systems. They also keep the shaft axial and radial deflections within acceptable limits for the impeller and shaft seal. The bearings often will face high axial loads, marginal lubrication, and high operating temperatures and vibration, all while attempting to minimize friction. If uncontrolled, friction can result in power loss, excessive heat generation, increased noise or wear, and early bearing failure. To optimize a pump’s performance, be sure to evaluate the unit’s bearings (types, designs, and arrangements) in the context of their anticipated operating environment. Suitable bearings are available to satisfy even the most difficult centrifugal-pump applications.

Ensure proper lubrication.
Improper lubrication accounts for more than 30% of bearing failures. Good lubricants prevent metal-to-metal contact and undesired friction. Common methods for effective lubrication of pump bearings include grease, oil bath, oil ring, oil mist, and air-oil. Oil mist generates the least friction (allowing rotational speed to be based on the bearing design rather than on lubrication limitations) and creates a positive pressure within the bearing housing (fending off invasive contaminants). Regardless of lubrication method, always specify lubricants according to the demands on vertical shafts and resistance to solids, pressure, temperatures, loads, and chemical attack.

Seal the system.
Bearing seals in centrifugal pumps retain lubricants or liquids, exclude contaminants, separate fluids, and confine pressure. The choice of a seal for centrifugal-pump bearings depends on the unique demands and operating conditions of the application. Keep in mind, though, that the bearing and sealing arrangement represents an integrated system. Dynamic radial seals generally are the best choice for centrifugal pumps. These designs create a barrier between surfaces in relative motion. Seal selection ultimately must be based on a thorough review of application parameters and environmental factors. For example, seals in pumping applications are often exposed to relatively constant pressure differentials. That makes pressure seals, with their pressurized seal cavities, the preferred choice.

Keep in mind that seals usually have a much shorter service life than the components they protect. Don’t fall into the common habit of scheduling seal replacement only at intervals dictated by other components, such as bearings.

Monitor equipment health.
Regular measurement and analysis of key physical parameters, such as vibration and temperature, can detect developing pump-system problems before they lead to failure. Basic instruments can assess and report on vibration, temperature, and other parameters. More advanced tools include online surveillance systems and software that deliver real-time data. Many problems first manifest as vibration, which is widely considered the best operating parameter for judging pump-train condition. Vibration analysis can detect problems such as imbalance, misalignment, bearing oil-film instabilities, rolling-bearing degradation, mechanical looseness, structural resonance, and a soft foundation.
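To show how such monitoring reduces readings to actionable states, here is a minimal sketch of trending an overall vibration reading against its baseline. The warning and alarm factors are illustrative assumptions, not values from any standard; real severity limits depend on machine class, mounting, and measurement location.

```python
def vibration_status(current_mm_s, baseline_mm_s,
                     warn_factor=1.5, alarm_factor=2.5):
    """Classify an overall vibration reading (mm/s RMS) against its baseline.

    The factors are hypothetical examples; consult the applicable
    severity standard and the OEM for real limits.
    """
    ratio = current_mm_s / baseline_mm_s
    if ratio >= alarm_factor:
        return "alarm"      # plan shutdown and inspection
    if ratio >= warn_factor:
        return "warning"    # trend more frequently, schedule analysis
    return "normal"
```

Trending the ratio rather than the absolute reading is what makes the baseline valuable: a pump that always ran at 4 mm/s is a different case from one that drifted there from 1.5 mm/s.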

Don’t overlook the pivotal role operators can play in pump reliability. They can serve as “eyes and ears” in the detection of equipment faults before problems escalate and also perform basic maintenance tasks. MT

SKF is a global supplier of bearings, seals, mechatronics, lubrication systems, and services that include technical support, maintenance-and-reliability services, engineering consulting, and training. For more information on motor bearings and other technologies and topics, visit


10:03 pm
June 13, 2016
Print Friendly

Use Thermal Imagers To Identify Motor Trouble

Making and cataloguing thermal images part of your regular preventive maintenance routine will help determine when and what motor components are varying from their baseline.

Infrared cameras, also called thermal imagers, can be important tools for troubleshooting motor problems, as well as for monitoring motor conditions for preventive maintenance. Infrared images reveal a motor’s heat signature, which can tell you a lot about its condition. The condition of motors, in turn, can play an important role in keeping plants up and running and their operating costs down.

According to experts at Fluke Corp., Everett, WA, here are some tips for scanning motors and drives with thermal imagers:

Build motor heat-signature profiles.
Capture good-quality infrared images while the motors are running under normal operating conditions. That gives you baseline measurements of component temperatures. Make infrared images of all critical components, including the motor, shaft coupling, motor and shaft bearings, and gearbox. Note that at low electrical loads, the indications of a problem can be subtle. As load increases, temperature will increase; if a problem exists, expect greater temperature differences at higher loads.

Note nameplate information and hot spots.
A motor’s normal operating temperature should be listed on the nameplate. An infrared camera cannot see inside the motor, but the exterior surface temperature is an indicator of the internal temperature. If a motor is overheating, the windings will rapidly deteriorate: every 50-deg. F increase in winding temperature above the designed operating temperature cuts the winding life by 50%, even if the overheating is only temporary. If a temperature reading in the middle of a motor housing comes up abnormally high, an IR image of the motor can tell you where the heat is coming from, i.e., windings, bearings, or coupling. A coupling that is running warm is an indicator of misalignment.
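The halving rule above implies an exponential relationship between overtemperature and winding life. A quick sketch, taking the 50-deg. F halving interval exactly as stated:

```python
def winding_life_fraction(overtemp_deg_f, halving_interval_deg_f=50.0):
    """Remaining winding-life fraction under the halving rule stated above:
    life is cut by 50% for each halving interval of overtemperature."""
    return 0.5 ** (overtemp_deg_f / halving_interval_deg_f)
```

By this rule, a winding run 100 deg. F over its design temperature retains only a quarter of its expected life, which is why even brief overheating events are worth tracking.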

Know the three primary causes of abnormal thermal patterns.

  • High-resistance contact surface, either a connection or a switch contact, often appears warmest at the spot of high resistance.
  • Load imbalances can appear equally warm throughout the phase or part of the circuit that is undersized/overloaded. Harmonic imbalances create a similar pattern. If the entire conductor is warm, it could be undersized or overloaded. Check the rating and the actual load to determine the cause.
  • Failed components typically look cooler than those that are functioning normally. The most common example is probably a blown fuse. In a motor circuit, this can result in a single-phase condition and the possibility of costly damage to the motor.

Create regular inspection routes and compare images.
It is best practice to create a regular inspection regimen that includes thermal images of all critical motor/drive combinations. Ideally, these images are made under identical operating conditions to permit apples-to-apples comparisons. Comparing current images with baseline images can help you determine whether a hotspot is unusual and verify whether repairs were successful. A thermal imager can easily transfer images into software for cataloguing, and sharing those catalogued images across the team can be invaluable. MT
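Cataloguing also makes the comparison step easy to automate: store baseline spot temperatures per component and flag any reading that drifts beyond an allowed delta. The component names, baseline values, and 18-deg. F threshold below are hypothetical examples, not recommended limits.

```python
BASELINE_F = {  # catalogued baseline spot temperatures (hypothetical values)
    "motor housing": 140.0,
    "shaft coupling": 110.0,
    "drive-end bearing": 125.0,
}

def hotspots(current_f, threshold_delta_f=18.0):
    """Return components whose current reading exceeds its baseline
    by at least the threshold, with the temperature rise in deg. F."""
    return {
        name: round(temp - BASELINE_F[name], 1)
        for name, temp in current_f.items()
        if name in BASELINE_F and temp - BASELINE_F[name] >= threshold_delta_f
    }
```

Run against a fresh inspection route, this kind of check turns a folder of images into a short list of components that actually need attention.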

For more information on thermal-imaging best practices, visit


9:57 pm
June 13, 2016
Print Friendly

Six Lubrication Myths Debunked

When it comes to machinery health, some lubrication myths are downright dangerous.

Despite years of concerted efforts by industry experts and suppliers, some dangerous lubrication myths continue to swirl around many maintenance operations. Motion Industries lubrication specialist Chris Kniestedt takes a down-and-dirty approach to debunk six of them.

Myth 1: All lubricating oils are the same.

From hydraulic fluids to gear lubricants to motor oils, each lubricant, be it synthetic or mineral-based, is uniquely formulated for its application with a specific viscosity; additive package; physical, chemical, and performance properties; and regulatory requirements. Various products may or may not be compatible with each other (see Myth 6).

Myth 2: If a little is good, more is better.

Take grease, for example. Over-greased bearings are a major cause of equipment failure; blown seals and overheating are just two consequences of using too much grease. For normal- or high-speed machinery, err on the side of caution and always check the OEM’s recommendations.

Overfilling gearboxes also leads to problems, including failed shaft seals and increased operating temperatures. In a gearbox with too much oil, the gears must work harder to move through the lubricant, generating more heat or churning the oil into foam.

Myth 3: Blue, red, or black grease is better than white or clear grease.

Color is not a key factor in selecting grease for an application; there’s no standard for doing so. Instead, pay attention to base-oil viscosity (based on speed, load, and expected operating temperature), thickener type and consistency (to mitigate incompatibility issues), and how well a product will pump at operating temperatures.

Myth 4: Tacky and stringy greases and oils offer better protection than non-tacky products.

It’s important to understand that lubricant films are only 10 to 20 microns thick at the point of contact. Moreover, film thickness is a function of base-oil viscosity at operating temperature and speed (and, to a lesser degree, load). Thus, always use caution when applying tacky lubricants, or greases with higher percentages of thickener, at high operating speeds.

Myth 5: Food Grade (NSF H-1) products are never as good as Non-Food Grade (NSF H-2) products.

Advances in base-oil technology and additive chemistry have made Food Grade H1 products stronger than ever, particularly with synthetics. There are many applications where a correct, strong Food Grade H1 product will work as well as a non-Food Grade H2 mineral-oil-based equivalent.

Myth 6: All products are compatible.

Consider greases. In addition to their base oils and additive packages, greases are formulated with various thickeners (lithium, lithium complex, aluminum complex, calcium, polyurea, bentone, and silica gel), which aren’t necessarily compatible with each other. Always exercise caution when changing greases; laboratory compatibility testing will clear up any doubts. If incompatibility exists between old and new products, purge bearings before changing to the new product. Oils aren’t always compatible either, especially the new generation of synthetics. Finally, mixing Food Grade H1 lubricants with Non-Food Grade H2 lubricants creates contamination issues that will cost you the H1 designation. MT

Chris Kniestedt is a lubrication specialist for the San Francisco Division of Birmingham, AL-based Motion Industries. For more information visit