Archive | Automation Strategies


9:14 pm
January 13, 2017

My Take: The Case for Change


By Jane Alexander, Managing Editor

How did you spend the past holiday season? I took some time to read and ponder a recent series of related articles and posts about the impact of automation on the human workforce by Claire Cain Miller, in “The Upshot” section of The New York Times. But that material was just the tip of an iceberg.

I also read most of the reader comments associated with those articles and posts, including (as of Tuesday, Jan. 3) the 550 regarding Miller’s feature published on Dec. 21, 2016. Titled “The Long-Term Jobs Killer Is Not China. It’s Automation,” the piece seemed to have touched a lot of nerves. In it, among other things, the writer described the situations of two individuals who, after losing their jobs to automation, have been unable to find new work in industry.

To her credit, the woman Miller quoted (who had actually lost two jobs as a result of automation) eventually enrolled in a computer class at Goodwill to improve her job prospects. For some reason, her strategy didn’t work. As she explained, “The 20- and 30-year-olds are more up to date on that stuff than we are because we didn’t have that when we were growing up.” She’s now on disability and living in a housing project.

The gentleman Miller referenced, a supervisor at an aluminum-extrusion operation for a decade, lost his job to a robot about five years ago. Since then, he’s been scraping by with odd jobs. Unfortunately, as the article noted, although many new factory jobs require technical skills, this person doesn’t own a computer and doesn’t want to.

These stories, with their element of hopelessness and giving up, touched my heart greatly. Been there. Done that. Or, at least, fell into a similar, uncomfortable hole, from which I had to dig myself out. Twice. Thus, I line up with “Oscar,” another reader of Miller’s job-killer article, who posted the following comment: “The world changes. You change with it or get left behind. This has been true since long before we had robots and computers to worry about.”

Automation is changing the world and we should be prepared to change, too.


Which gets me to thinking about something else I did during the holidays: I spent time ordering copies of the book Frugal Innovation, by Navi Radjou and Jaideep Prabhu (2016, Economist Books, London), for several of my loved ones (old and not so old). I hope it gets them thinking as well — outside the box and elsewhere.

This 2016 CMI Management Book of the Year is full of insight, backed by case studies from developing countries on how, when resources are limited, businesses and individuals can turn adversity into success by tapping into the most abundant of all resources: human ingenuity. (In his TED Talk on creative problem solving in the face of extreme limits, author Navi Radjou likens this ability to alchemy, i.e., turning something of little or no value into something of great value. And what’s not to like about that?)

Congratulations if you’ve received or read this book and/or if your own organization is already leveraging the management technique of frugal innovation (or “jugaad,” the Hindi term for an improvised solution born from ingenuity and cleverness). To paraphrase “Oscar,” the commenter on the previously referenced article from The New York Times, the world changes. We can change with it or be left behind.

I look forward to hearing about the experiences (make that successes) of all you never-give-up alchemists out there. MT


4:44 pm
January 3, 2017

White Paper | ROI and the Connected Enterprise

2016 is done, and internal debates with manufacturers and OEMs point to building business cases for Industrial Internet of Things (IIoT) initiatives. IT and OT suppliers are partnering to provide more holistic solutions for manufacturers, but internal metrics have to be in place for these new IIoT initiatives to be successful.

A new white paper from ThingWorx, a PTC company, “Quantifying the Return on Investment (ROI),” provides starting points for manufacturers on what key metrics are needed for measuring these projects. The paper includes three case applications and a deep dive into the business entities within an enterprise, such as assets, engineering, operations, services, and sales.

>> Related Content | Partnerships Emerge as Manufacturers Eye IIoT Strategies 

The paper emphasizes a holistic look at IIoT and how the above entities are connected. For example, the first customer success story reveals these metrics from disparate business units: reduced mean time to repair (MTTR), reduced travel time for calls and a look at service calls for each problem resolved remotely.
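As a rough illustration of the first of those metrics, MTTR can be computed directly from repair records. The sketch below is a hypothetical example; the function name and sample timestamps are invented, not taken from the white paper:

```python
from datetime import datetime, timedelta

def mean_time_to_repair(repairs):
    """Average repair duration across a list of (start, end) timestamps."""
    durations = [end - start for start, end in repairs]
    return sum(durations, timedelta()) / len(durations)

# Three hypothetical repair events: 3.5 h, 1 h, and 1.5 h.
repairs = [
    (datetime(2016, 11, 1, 8, 0), datetime(2016, 11, 1, 11, 30)),
    (datetime(2016, 11, 9, 14, 0), datetime(2016, 11, 9, 15, 0)),
    (datetime(2016, 11, 20, 9, 0), datetime(2016, 11, 20, 10, 30)),
]
print(mean_time_to_repair(repairs))  # 2:00:00
```

Tracking the same figure before and after remote-resolution capability is one way to express the reductions the case study reports.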

From the white paper:

ThingWorx has interviewed customers, analyzed results, and found top- and bottom-line impacts that executives need to understand. The following sections share these findings and discuss what they mean for the enterprise. You will find an overview of the business metrics for IoT and the description of a framework to quantify the return on investment.

>> Click here to download the white paper

For more IIoT coverage in maintenance and operations, click here!


9:09 pm
December 20, 2016

View from the Top

Keck Observatory uses a robust building-automation system to increase the reliability of advanced telescopes that astronomers use to hunt planets and dissect galaxies.


By Michelle Segrest,  Contributing Editor

When John Baldwin arrives at the W. M. Keck Observatory headquarters every day, it takes another two hours for him to drive to his office. The office sits 13,796 ft. above sea level on the summit of Mauna Kea, a dormant volcano on the island of Hawaii.

The W.M. Keck Observatory provides precise in-depth views of the universe through the two largest telescopes in the world. Photo by Andrew Richard Hara Photography © 2014


As he drives up the mountain, the altitude increases and the oxygen available to his body decreases. The altitude can cause him to feel lightheaded, fatigued, and dehydrated. It also can cause irritability and lack of concentration. These are normal working conditions for him and the two dozen other employees who work at the observatory’s summit location. The spectacular view makes it all worthwhile.

Even at almost 14,000 ft., Baldwin and the astronomical scientists from the Univ. of California and California Institute of Technology still need to look up to observe the universe with unprecedented power and precision. The Keck Observatory is home to the world’s largest and most scientifically productive optical and infrared telescopes. Each of the twin telescopes weighs 300 tons and operates with nanometer precision. The primary mirrors are 10 meters in diameter, each composed of 36 hexagonal segments that work in concert as a single reflective surface.

As the summit superintendent, Baldwin’s job is to keep the telescopes, domes that shelter them, and the entire facility functioning. This includes chillers, air conditioners, hydraulics, pneumatics, cranes, pumps, gearboxes, motors, and other equipment. “It’s amazing how many systems need to work perfectly and in concert with each other for us to operate normally,” Baldwin said.

Originally from New Jersey, Baldwin moved to Hawaii after taking a cruise there with his family in 2003. He has worked with Keck since 2006 and daily tackles significant challenges inherent to the facility’s complexity and location.

A unique facility

The telescopes are not what most people consider to be typical telescopes.

“People don’t physically look through the telescopes,” Baldwin explained. “There are instruments on the telescope that have charge-coupled devices, like in a digital camera but much larger, and the computers do all the work. Scientists look at pictures of the galaxy, and can see the spectrum of light from the galaxy. By analyzing the spectrum, they can deduce a lot of data about a star or a galaxy. For example, they can determine what it is made of, how far away it is, how big it is, and how fast it is moving relative to the Earth. We use computers to guide the telescope to the target, then track the target for however long the exposure is needed.”

The data flow to astronomers who may not be on site. They can observe from the Kamuela headquarters location or even from California. People on site physically operate the telescopes for the astronomers. Some of the Keck observers are “planet hunters” who look at different stars and collect data to determine whether there is a planet orbiting around a star.

Special challenges

The altitude is, by far, the biggest challenge for Baldwin and his team of six maintenance technicians, he said.

“A normal task that you would complete in one hour at headquarters might take up to two and a half hours at the summit,” Baldwin explained. “That is completely normal up here. Your body tires very quickly. It’s normal to feel lightheaded. If you’ve been working here for many years and your body is acclimated, you may still feel lightheaded, but you might not notice it.”

To combat the effects of the altitude, employees are required to stop for 30 minutes every day once they reach the 9,000-ft. mark while driving up the mountain. There is a visitor center at that point where they can have breakfast and allow their bodies to acclimate. Still, the effects of the altitude can be felt.

“You may not think as clearly as you would at sea level,” Baldwin continued. “If you are doing a complex computation or calculation, it’s always a good idea to call headquarters and ask an engineer to double-check your numbers. It’s easy to forget what you are doing while you are in the middle of a project. It’s also normal to feel very fatigued and temperaments can go sour quickly.”

Since the summit employees go home every night, they must acclimate their bodies to the change in altitude twice each day. The two-hour drive counts toward the 10-hr. shifts, reducing the number of working hours for each employee and creating another challenge unique to the facility.

To counteract this efficiency loss, the organization takes care to consistently upgrade its technical and scientific capabilities, including the maintenance infrastructure, Baldwin said. However, this also creates additional work, so a balance must be struck.

“With constant upgrades, we have the same individuals doing the plumbing, electrical, steel work, fabrication, and also installation and implementation of the upgrades,” Baldwin explained.

Baldwin ensures that his six technicians are cross-trained, as he has been throughout his career. He began as an HVAC installation mechanic and later became a facilities technician. His experience spans controls and programming, electrical systems, hydraulics, pumps, airflow, and the handling of different fuels.

“Behind all these beautiful optics, there is very complex machinery,” he said. “The telescope control and mirror systems are super complex and really cool.”

Baldwin’s team of six includes four mechanical technicians, one senior CNC machinist, and an industrial electrician.

A look at the K2 primary mirror inside one of Keck Observatory's unique telescopes. Photo by Andrew Cooper © 2007


Building-automation system

One of the key upgrades was the installation of a robust building-automation system, a project that began in 2009 and was spearheaded by Baldwin and his colleague, Mark Devenot.

“At the time, we had a very old Trane Tracer system on one side of the facility only,” Baldwin said. “It was archaic, and no one knew how to run it. The philosophy was, don’t touch it because we don’t know how to fix it. We couldn’t even utilize it to modify how our equipment was running.”

The company upgraded to the Alerton (Lynnwood, WA) building-automation system. Baldwin and his team were trained by a local contractor on how to install the system and how to program it. It was first rolled out on two of the air conditioners that cooled the Keck 2 dome.

“It can be monitored remotely and programmed to do exactly what you want it to do at any given time,” said Baldwin. “Our management found lots of value in this, so we were able to roll it out to the rest of the facility.” The system provides several benefits. At such an inconvenient location, the remote-monitoring capabilities are critical. Technicians can log in from home and help the on-site staff troubleshoot a problem. More convenient usability and customization also provide payback.

“For example, our domes are kept at a very low temperature, typically close to zero degrees Celsius,” Baldwin explained. “We use a nighttime forecast to tell us what temperature to use so when we open the dome at night, the mirrors and telescope structure are approximately the same temperature as the nighttime air. This way the mirrors don’t fog. We also need to avoid expansion or contraction of the steel due to temperature differences, which affects the optical quality of the telescope. In the past, before this system was installed, the air conditioners that live in our domes were, for the most part, off-the-shelf air conditioners—they were either on or they were off. Once they cool to the forecasted temperature, they shut down.”

Baldwin said humidity is also a concern. “At night, if we have to close the domes because of some passing weather, if our dome temperature is satisfied, then the air conditioners would not run and the humidity would spike. When the weather would clear there would be a delay opening the domes because we would be at dewpoint on our mirrors,” Baldwin further explained. “We were able to customize the actual functionality of the air conditioner, add a dehumidification mode, and stage the air conditioning to hold tighter tolerances to temperatures in the dome. Because we can be below freezing up here, the cooling coils on the air conditioners collect a lot of ice. So we have to defrost them with electric defrost. We were able to upgrade the process and control the amount of energy we were using to defrost the cooling coils for maximum efficiency.”
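The control behavior Baldwin describes reduces to setpoint-and-dewpoint logic. The following is a hypothetical sketch, not the observatory’s actual Alerton programming; the 2 °C dewpoint margin is an assumed threshold, and the dewpoint uses the standard Magnus approximation:

```python
import math

def dewpoint_c(temp_c, rel_humidity_pct):
    """Magnus approximation for dewpoint in degrees Celsius."""
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def hvac_mode(dome_temp_c, rel_humidity_pct, setpoint_c):
    """Decide what the dome air handlers should do.

    setpoint_c would come from the nighttime forecast, so the mirrors
    match ambient air when the shutter opens and don't fog.
    """
    if dome_temp_c - dewpoint_c(dome_temp_c, rel_humidity_pct) < 2.0:
        return "dehumidify"   # near dewpoint: condensation risk on the optics
    if dome_temp_c > setpoint_c:
        return "cool"
    return "idle"

print(hvac_mode(dome_temp_c=0.0, rel_humidity_pct=95.0, setpoint_c=0.0))  # dehumidify
```

The key difference from an on/off unit is the dewpoint branch: even when the temperature is satisfied, the system keeps conditioning the air if humidity spikes.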

At this altitude, Baldwin said the weather can change quickly. “We can be 45 degrees Fahrenheit and sunny, and then an hour later we might be in a total white-out blizzard and need to evacuate the mountain for safety concerns,” he said. “I have actually shoveled more snow here than I ever did in New Jersey.”

The building-automation system allowed the observatory to run its equipment more efficiently while customizing the system to meet its special needs. “Now, during times of high humidity, we are able to keep our domes around 60% humidity, whereas before it could spike up to 85% or more, depending on the outside environment,” he said. “This bodes well for our optics. We’ve since expanded monitoring to other systems, such as hydraulics and air compressors. In some cases, they are completely controlling some of our chillers and air conditioners, and it is very helpful, especially if there is a breakdown. The more we have connected to the building-automation system, the easier it is to troubleshoot remotely.”

The system has also helped to reduce electrical costs in an area where this is a critical concern. “With this system, we’ve been able to monitor our instantaneous power demand,” Baldwin said. “We can monitor a rise in our electrical usage and stage down non-critical equipment to maintain our instantaneous demand below a preset amount to keep electrical costs down.”
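Demand staging of this kind amounts to a priority-ordered load-shedding loop. The sketch below is hypothetical; the load names, power draws, and kW limit are invented, not the observatory’s configuration:

```python
def stage_down(loads, demand_kw, limit_kw):
    """Return names of non-critical loads to switch off, in shed order.

    loads: list of (name, draw_kw, critical) tuples. Critical loads are
    never shed; shedding stops once demand is back under the limit.
    """
    to_shed = []
    for name, draw_kw, critical in sorted(loads, key=lambda l: l[2]):
        if demand_kw <= limit_kw:
            break
        if not critical:
            to_shed.append(name)
            demand_kw -= draw_kw
    return to_shed

loads = [
    ("telescope drives", 40.0, True),   # critical: never shed
    ("dome AC unit 2", 25.0, False),
    ("shop compressor", 15.0, False),
]
print(stage_down(loads, demand_kw=95.0, limit_kw=70.0))  # ['dome AC unit 2']
```

Shedding one 25-kW non-critical load brings the 95-kW demand back to the 70-kW preset, so the loop stops there.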

Maintenance philosophy

All of the general maintenance at Keck Observatory is done in-house. Only large additions and ancillary projects, such as crane inspections, are sourced outside the infrastructure team.

“Our overall goal is to be totally reliability centered, but we are not there yet,” Baldwin said. “What we rely on is preventive and predictive maintenance. Preventive is our work-order program based on manufacturer specifications for our equipment. There is a frequency for each piece of equipment. We also have a pretty inclusive predictive-maintenance program. We do a lot of fluid analysis, thermography for electrical PMs, and we use ultrasound to give indications of broken wheel shafts that are part of our dome system. We recently started using an ultrasonic tool to monitor our bearing health and compressed-air leaks.”

Baldwin said his best practices begin with a twice-a-day facility walk-through.

“We spend about two hours walking through the entire facility, writing down pressures and temperatures, looking for leaks, and listening for odd sounds coming out of equipment,” he said. “And, because we do it so frequently, it’s one of our best indicators for something going wrong. The morning walk-through is more in-depth. We use a clipboard to record pertinent information like how much water is in the facility. What are the temperatures of the chillers? What are the pressures of the pump that feeds the facility for cooling the instruments?

“Before we leave the facility, we do a visual walk-through using the same route.”

He said the best advice he can give other managers driving maintenance and reliability programs is to question everything and educate your team with the most advanced tools possible. “The rewards are incredible,” he said. “If you can then educate upper management on why those tools have such value, everyone wins.” MT

Michelle Segrest has been a professional journalist for 27 years. She specializes in the industrial processing industries. If you know of a maintenance and/or reliability expert who is making a difference at their facility, send her an email at


9:09 pm
November 9, 2016

Move From Raw Data to Smart Work

Manufacturers are flooded with data. Here’s some guidance to help you put that data into context, understand it, and make it work for you.

By Tim Sowell, Chief Architect, Software, Industry Solutions and Stan DeVries, Senior Director, Solutions Architecture, Schneider Electric

In today’s “flat world” of demand-driven supply, the need for agility is accelerating. This is driving leading companies to transform their operational landscape (systems, assets, and culture) to a “smart work” environment. This move toward agility transforms thinking from a process-centric view to a product-and-production focus, requiring a dynamic, agile work environment between assets/machines, applications, and people. The paradigm shift from the traditional “lights-out manufacturing” concept of fully automated systems to an agile world of dynamically planned yet scheduled work requires:

• automated embedded intelligence and knowledge
• augmented intelligence using humans to address dynamic change.


At the foundation of this shift is an environment in which a worker can have the mind space to understand the larger changing situation and make augmented, intelligent decisions and actions. To provide this, data are transformed naturally into operationalized information upon which decisions can be made, then combined with “tacit, applied knowledge,” providing incredible value when taking operational actions.

The explosion of information across industrial operations and enterprises creates a new challenge—how to find the “needles” of wisdom in the enormous “haystack” of information.

Listen to MT editorial director Gary L. Parr’s interview with one of the article’s authors.

One of the analogies for the value and type of information is a chain from data, through information and knowledge, to wisdom. In the industrial-manufacturing and processing context, it may be helpful to use the following definitions:

• Data: Raw information that varies in quality, structure, naming, type, and format.
• Information: Enhanced data that has better quality and asset structure and may have more useable naming, types, and formats.
• Knowledge: Information with useful operational context, such as proximity to targets and limits, batch records, historical and forecasted trends, alarm states, estimated useful life, and efficiency.
• Wisdom: Prescriptive advice and procedures to help achieve targets such as safety, health, environment, quality, schedule, throughput, efficiency, yields, and profits.
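A minimal sketch of that chain for a single process value might look like the following; the asset name, alarm limit, and advice strings are all hypothetical:

```python
def to_information(raw):
    """Data -> information: clean and structure a raw reading."""
    return {"asset": "pump-01", "measure": "bearing_temp_c", "value": float(raw)}

def to_knowledge(info, limit_c=80.0):
    """Information -> knowledge: add operational context (proximity to a limit)."""
    info["limit_c"] = limit_c
    info["margin_c"] = limit_c - info["value"]
    info["alarm"] = info["value"] >= limit_c
    return info

def to_wisdom(knowledge):
    """Knowledge -> wisdom: prescriptive advice for the operator."""
    if knowledge["alarm"]:
        return "Stop pump-01 and inspect the bearing."
    if knowledge["margin_c"] < 10.0:
        return "Schedule lubrication at the next shift change."
    return "No action needed."

print(to_wisdom(to_knowledge(to_information("73.5"))))
```

Each stage adds context the previous one lacked: the raw string becomes a typed reading, the reading gains its limit and margin, and the margin becomes an instruction.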


To illustrate this transformation, imagine driving along an unfamiliar California freeway in a GPS-enabled rental car:

• Data: The GPS knows that I am on a freeway, traveling at 80 mph.
• Information: It is “situationally aware” that I am heading south on the I-405 freeway.
• Knowledge: It works with other services to determine that 10 miles ahead the traffic is stopped, and provides me with a warning that I will be delayed due to a traffic hazard. It has combined traffic knowledge with my location, speed, and destination to provide timely, advanced decision-support knowledge that I can use to potentially take an action.
• Wisdom: The GPS provides two alternate routes, giving me the time and characteristics of each route.

Without requiring me to take my eyes off the road and use an A-Z directory, I have been:

• warned ahead of time of an issue that could prevent me from reaching my destination on time
• given two alternatives and the information necessary to realize my goal with either choice.

There is no reason why this same transformational journey from raw data to wisdom cannot apply to manufacturing operations.

Avoid the Pitfalls

Many companies stand on the edge of a data swamp that is growing quickly, with the Industrial Internet of Things and Smart Manufacturing providing access to an exponential level of additional data from their industrial value chain. This data influx can either bog down growth or, if leveraged to achieve proportional knowledge and wisdom, create a new level of operational agility. The Fourth Industrial Revolution (Industrie 4.0) provides a framework for leading this ubiquitous transformation. Major industrial organizations are now realizing the incredible value that can be extracted from data and are combining time, resources, and technologies, such as big data and machine learning, with a new evolution in operational culture to leverage this potential.

Those operating in manufacturing have been living for decades with vast amounts of data located in historians, equipment logs, and across their extended supply-chain network. Data, in and of itself, is not of much value. The same can be said for reams of paperwork that document best practices—it isn’t of much value sitting on a desk or in a document-management system.


Operational Data Management

We all talk about the ability to generate data from different devices. This can be valuable, provided there is some enterprise integration. But, can you really have effective information if there is no context?

The challenge is how to gain this context and then sustain it over several devices (things) without having a significant impact on those devices. In other words, how does one add, remove, and evolve devices? This requires an operational data-management system that is a “yellow pages” of the system, providing the context and relationship between devices and the operations.

An operational data-management system provides the ability to register new devices and data inputs, while maintaining the detail in the device, and then provides the bigger operational-process alignment. It supplies the association (alternate naming of a device) so other applications can find and interact with it. Often, other systems and machines have a different outlook on the process and will use different naming and references for the device. An operational data-management capability provides this association and the ability to align many devices without a change in the underlying applications or devices.
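A toy version of such a “yellow pages” registry could look like the sketch below; the device names, aliases, and metadata are invented for illustration:

```python
class DeviceRegistry:
    """Minimal device registry: each device keeps its native name, and
    each consuming application registers its own aliases without any
    change to the device itself."""

    def __init__(self):
        self._devices = {}   # native name -> device metadata
        self._aliases = {}   # (app, alias) -> native name

    def register(self, native_name, **metadata):
        self._devices[native_name] = metadata

    def alias(self, app, alias, native_name):
        self._aliases[(app, alias)] = native_name

    def lookup(self, app, alias):
        return self._devices[self._aliases[(app, alias)]]

reg = DeviceRegistry()
reg.register("PLC7.AI3", kind="temperature sensor", line="extruder 2")
# The MES and the historian each see the same point under their own name:
reg.alias("mes", "Extruder2/ZoneTemp", "PLC7.AI3")
reg.alias("historian", "EXT2_TT_103", "PLC7.AI3")
print(reg.lookup("mes", "Extruder2/ZoneTemp")["line"])  # extruder 2
```

Adding or retiring a device touches only the registry entries, not the applications that reference it through their aliases.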

From a data-to-information point of view, it provides the context needed to gather data and transform it into information, so that big-data analysis and other tools can be applied to convert that information into knowledge. Knowledge provides a pattern to ensure that contextualized operational data (production, quality, machine status) is integrated with templated collaboration activities and, ultimately, broad value/supply-chain management.

Without this, companies have a real risk of gathering significant amounts of data and being unable to create the associated proportion of information, knowledge and, eventually, wisdom. Knowledge allows companies to put architectures and systems into place and gain contextualization while providing the plug-and-play ability for devices to be added to the solution.

The cost to store and share data has dropped significantly, and a simplistic expectation is that, although storage is growing by a factor of millions in only a few years, the pattern illustrated in Fig. 4 evolves.


Although the pattern might seem to be logical, it is actually a nightmare, because it becomes much harder to discover and translate knowledge and wisdom from another operation, especially in another location, to the local needs. But there is a solution.

To understand the problem better, let’s consider that knowledge includes context. This context begins with local details, including time, location, process or machinery configuration, raw materials, energy, and products being processed or produced. It is already valuable to have wisdom to achieve and sustain best performance for the community, customers, and the corporation. This local context only needs to know its immediate information, if it has enough wisdom.

Now let’s consider what happens when a single site, a fleet of similar sites, or an enterprise has numerous similar operations. How can local wisdom be enhanced by using wisdom from other operations? Solving this problem is important for operations transformation, such as operating physical assets as one (in a chain or as peers) or supporting multiple operations with a flexible team of remote experts.

One approach to solving the knowledge proliferation problem is to take advantage of a methodology used in distributed databases, where a technique called “federated information” is used. This technique is especially valuable in industrial operations-management architectures. Federated information does not change the local information’s naming or structure, but provides multiple translations, across the database, for multiple similar structures and for multiple contexts such as what financial, technical support, scheduling, quality, and other functions require. It is an alternative to the fragility and complexity of attempting to force a uniform and encompassing naming and structure in an attempt to satisfy all applications and users.
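In code, federated information can be as simple as per-context translation tables that resolve to unchanged local names. The tags, contexts, and values below are hypothetical:

```python
# Local tag names are never renamed; each consuming function (finance,
# scheduling, ...) maintains its own translation table over them.
local_data = {"FT-101": 42.7, "TT-205": 78.3}   # local naming, untouched

translations = {
    "finance":    {"steam_cost_driver": "FT-101"},
    "scheduling": {"line2_flow": "FT-101", "line2_temp": "TT-205"},
}

def federated_get(context, name):
    """Resolve a context-specific name to the local tag and return its value."""
    return local_data[translations[context][name]]

print(federated_get("scheduling", "line2_temp"))  # 78.3
```

Adding a new context means adding one translation table, with no renaming of local structures and no change to the other contexts, which is the fragility the federated approach avoids.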


The same approach can be applied to wisdom. Currently, hobbyists and enthusiasts around the world share wisdom for restoring cars, making furniture, playing a musical instrument, and gardening, as examples. Anyone with no experience at all can ask “Where do I get started?” and most respondents will provide some type of advice. In the same forum, experts can share wisdom that is valuable and understandable at their level of experience. This wisdom is extremely decentralized, and the experts provide the local and regional translations.

In the industrial-operations environment, federating wisdom is partially automated by expanding the local context, including information about adjacent operations, about the chain or peers if these operations are being managed as one, and then knowledge is expanded by applying the context of group targets and performance.

Some enterprises have hundreds or as many as tens of thousands of similar operations supported by dozens or fewer experts. Discovery of wisdom is greatly enhanced by maintaining an architecture that enhances local context without modifying or attempting to force burdensome structures on local operations.

Empowerment through Wisdom

So how do operational intelligence, industrial analytics, and the movement to wisdom relate? They are different, but all concern empowering an operational workforce to make earlier decisions and take informed actions. One of the big drivers for platforms is to manage variance. We talk about supervisory, MES, information, and simulation platforms, but we also must have a people platform that covers:

• collaboration between people
• activity hosting, including embedded information/knowledge and associated actions
• transformation of information into situation awareness for the interested/interacting user
• management of operational work between team members

This people platform will mitigate workforce turnover by abstracting the different skill and experience levels with embedded applied knowledge (wisdom), so the experience is now in the system. This is a key concept for operational transformation.

Industrial analytics provide the shift from the past through the present and into the future, based upon high-fidelity models gained from experience. They add a new dimension to worker tools and transform the decisions workers are about to make, pairing predictions of what will happen with recommended actions to take.

This also provides the answer to “What should I do next?” with experience, forethought, and understanding. Operational intelligence furthers decision making by providing screens/presentations of the situation or known questions with context and awareness.

Operational intelligence provides the worker with an understanding of “now,” where he/she is, and what the future holds, simply and clearly. Increasingly, there is demand for this type of operational window and views. It is not analysis but practical information around the current situation and immediate future. Just a simple view of the task or question provides the clear awareness and actionable answers.

Are these different experiences? No, they are all functional value expansions on each other, and should be seen as building blocks on the road to providing an operational execution knowledge platform, with built-in experience. In other words, they provide a foundation for absorbing turnover and transition in the workforce while maintaining operational consistency and efficiency.

There is a journey to smart work that organizations are now following, much like the first continuous-improvement initiatives that began nearly 50 years ago, such as Lean, Six Sigma, and TQM. Operational execution embeds that improvement in the systems and culture, enabling proportional growth in knowledge and wisdom to address the dynamic world of smart work. The only difference is that operational, data-driven systems can now be a part of these continuous-improvement strategies.

Manufacturing is in a constant drive to improve performance, and transformation of work has become the main method to achieve and sustain this. Higher capacity or more efficient machinery and processes aren’t sufficient anymore. Manufacturers with agile and cyclical operations need a method to remain cost competitive during the lower throughput periods, yet remain responsive enough to take full advantage of high throughput or high margin conditions.

Implementing systems that transform work, using higher-value information and reliability to change when, where, and/or how users make decisions, is the foundation for this next level of improvement.

Operational transformation through smart work is a journey, and technology is only one of the key elements. The user culture must adapt, similar to the previous waves of quality, safety, health, and environmental improvements. The journey advances with work-process improvements, as applied to sections of a site or an entire site. Existing software must be assessed in terms of delivering knowledge and wisdom and supporting mobile and traveling workers, with the goal of significantly reducing the skill and effort required to maintain them. The journey is worthwhile, practical, and essential for manufacturers not only to stay competitive, but also to thrive.

Tim Sowell is vice president of Software System Strategy at Schneider Electric, Lake Forest, CA. In this role, he leads the direction and strategy for the company’s Wonderware software portfolio. Stan DeVries is senior director, Solutions Architecture at Schneider Electric Software. He works with customers to implement innovative, reproducible data-architecture patterns and reference architectures.


7:41 pm
September 12, 2016

SSR or EMR? Select the Right Relay

Solid-state and electromechanical relays are not necessarily interchangeable. Evaluate your application before deciding which to use.


Solid-state relays (SSRs) are replacing electromechanical relays (EMRs) in many applications across industry. There are several reasons why, including their long life, low noise, compact size, lack of moving parts, and total absence of arcing. These advantages make SSRs a popular choice for applications involving repetitive operations or fast turn-on/turn-off times, or in areas that require minimal electrical noise.

So, what types of SSR or EMR relays are right for the various applications in your plant? Automation professionals at Opto 22, Temecula, CA, provide some selection guidelines.

Use SSRs in applications that require:

Repetitive operation cycles. Such applications include lights and electric heaters. SSRs have no mechanical components to wear out and no failure mode related to the number of operation cycles.

Minimal electrical noise. SSRs greatly minimize electrical noise because they turn on and off when voltage is zero in the AC cycle. Conversely, most EMRs turn on and off at any point in the AC cycle, which means they can generate significant voltage spikes, causing electrical noise that can affect other devices in the area.

High-speed timing. SSR turn-on times are highly predictable, while times for a mechanical relay vary based on the nature of the device and the environment.

Consider EMRs in applications that require:

High starting loads. Such applications include motors and transformers. SSRs are more sensitive to voltage transients than EMRs. If a relay gets hit hard enough a sufficient number of times, even SSRs with good transient protection will degrade or fail. This makes SSRs less ideal for driving highly inductive electromechanical loads, such as some solenoids and motors.

Operation in high-temperature environments. SSRs become less efficient as the relay temperature rises. The current rating for an SSR is de-rated, or reduced, based on the ambient temperature. EMRs are not affected in the same way.

Zero leakage current. In the “off” state, an SSR will exhibit a small amount of leakage current—typically a few mA. Because EMRs are mechanical, they do not leak current. MT
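The selection guidelines above can be condensed into a simple decision sketch. This is purely illustrative; the attribute names are ours, not an Opto 22 tool or API, and a real selection should weigh the full application.

```python
# Illustrative sketch of the SSR-vs-EMR selection guidelines above.
# Attribute names are hypothetical, not from any vendor tool.

def recommend_relay(repetitive_cycles=False, low_noise_required=False,
                    precise_timing=False, high_starting_load=False,
                    high_ambient_temp=False, zero_leakage_required=False):
    """Return 'SSR', 'EMR', or 'either' per the guidelines."""
    # EMR criteria act as hard constraints for solid-state relays:
    # inductive starting loads, high ambient heat, and zero leakage.
    if high_starting_load or high_ambient_temp or zero_leakage_required:
        return "EMR"
    # SSR strengths: no wear from cycling, zero-cross switching,
    # and predictable turn-on timing.
    if repetitive_cycles or low_noise_required or precise_timing:
        return "SSR"
    return "either"  # no deciding factor; evaluate the application further

print(recommend_relay(repetitive_cycles=True))   # e.g., electric heater
print(recommend_relay(high_starting_load=True))  # e.g., motor load
```

Note the ordering: the EMR checks run first, reflecting the article's point that transients, heat, and leakage can defeat an otherwise attractive SSR.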

Special SSR Concerns

According to automation experts at Opto 22, Temecula, CA, two factors inherent to semiconductor-based solid-state relays (SSRs) require special attention:

Leakage current. When in the “off” state, an SSR will exhibit a small amount of leakage current, typically a few mA. It’s slight, but this current can keep some loads from turning off, especially in high-impedance applications, such as small solenoids or neon lamps, that have relatively small “hold-in” currents. When SSRs that switch high voltages are electrically open, leakage current can still cause their circuits to produce potentially troublesome voltages on the outputs. These issues can usually be addressed by placing a power resistor, sized to draw 8 to 10 times the rated maximum leakage current of the SSR, in parallel with the load.
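The resistor-sizing rule of thumb works out numerically as follows. The line voltage and leakage figures below are example values, not from any specific datasheet:

```python
# Size a bleeder resistor to draw 8-10x the SSR's rated leakage
# current, so leakage can't hold a high-impedance load "on".
# Example values only; consult the relay's datasheet in practice.

line_voltage = 120.0      # V AC (rms)
leakage_current = 0.005   # A (assume 5 mA rated maximum leakage)
factor = 10               # draw 10x the leakage current

bleed_current = factor * leakage_current   # 0.05 A through the resistor
resistance = line_voltage / bleed_current  # ohms, in parallel with load
power = line_voltage ** 2 / resistance     # W dissipated continuously

print(f"Bleeder resistor: {resistance:.0f} ohm, rated {power:.1f} W or more")
# -> Bleeder resistor: 2400 ohm, rated 6.0 W or more
```

The power figure is why the article specifies a *power* resistor: even at a few mA of leakage, the parallel resistor dissipates several watts at line voltage.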

Operational-temperature limits. Semiconductor-based relays become less efficient as their temperature increases. Thus, the current rating for an SSR is de-rated, or reduced, based on the ambient temperature. Since SSRs also generate heat in the “on” position, heat management is vital.
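Derating is normally published as a curve in the relay's datasheet. As a rough illustration only, a linear derating model is sketched below; the 40 C knee, slope, and rated current are assumptions for the example, not real datasheet values:

```python
# Illustrative linear derating model for an SSR's current rating.
# The knee temperature, slope, and rated current are assumed example
# values; always use the manufacturer's published derating curve.

RATED_CURRENT = 25.0   # A, valid at or below the knee temperature
KNEE_TEMP = 40.0       # C ambient, where derating begins (assumed)
SLOPE = 0.25           # A lost per degree C above the knee (assumed)

def derated_current(ambient_c):
    """Return the usable current rating at a given ambient temperature."""
    if ambient_c <= KNEE_TEMP:
        return RATED_CURRENT
    return max(0.0, RATED_CURRENT - SLOPE * (ambient_c - KNEE_TEMP))

print(derated_current(25))   # full rating: 25.0 A
print(derated_current(60))   # 25 - 0.25 * 20 = 20.0 A
```

The practical takeaway matches the sidebar: the hotter the enclosure, the less current the SSR can safely carry, so heat-sinking and ventilation directly affect usable capacity.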

—Jane Alexander, Managing Editor


9:19 pm
June 20, 2016

White Paper | How To Design an Industrial Internet Architecture

Source: Industrial Internet Consortium



Interoperability has been a mantra in manufacturing for some time, but management needs more resources to fully realize IIoT. The industrial internet depends on interoperability, which is why this reference paper on industrial architecture can be a valuable asset in developing plant or process-manufacturing strategies. The Industrial Internet Consortium recently released its Industrial Internet Reference Architecture white paper, which provides multiple points of view for the enterprise: connectivity, functional, implementation, safety, communication security, data distribution, secure storage, and integration best practices.

Chapter 13 discusses edge-networking principles and recommends a blueprint for data-reduction techniques, along with other storage best practices. Contributors include a who’s who of technology suppliers and manufacturers, such as ABB, GE, SAP, IBM, RTI, Fujitsu, Intel, Micron, and AT&T, to name a few.

Download the White Paper >>


8:51 pm
February 8, 2016

IoT Offers Reliability Solutions


By Grant Gerke, Contributing Editor

In my coverage of the manufacturing and process industries for the past 15 years, I’ve seen plenty of marketing buzzwords and campaigns come and go. Mechatronics, Sustainability, NextGen Manufacturing, Security 2.0 and, of course, Internet of Things (IoT) are just a few of the recent ones. However, IoT is truly a transformative change for manufacturing and, with it, maintenance and reliability.

This year, Maintenance Technology magazine will start leading our readers through the forest of buzzwords and content to deliver real insights into how your maintenance team can benefit from IoT technology. This bi-monthly column is the gateway to a steady stream of IoT content online. Our online destination will include podcast interviews with subject-matter experts, application insights, video reviews, and content from leading experts.

IoT is nothing new for maintenance teams, with third-party services already playing a huge role in operations and, in turn, more connected machines and systems. Machine analytics made possible by ubiquitous sensors, robust networks, and standard interfaces create new opportunities and solutions for enterprises. This isn’t a marketing campaign for the next couple of years, it’s a structural change.

One example is remote vibration analysis for large enterprises as they try to consolidate resources across multiple plants. In a 2015 post on the Emerson Process Experts blog, Jim Cahill cited a power-producer application in which personnel “remotely monitored their rotating machinery to improve reliability and prevent disruption for their customers.”

A North American power company used Emerson’s machinery health monitors for critical machines in three different facilities and tied them back to its predictive-maintenance server. For non-critical machinery, the maintenance team uses portable analyzers to gather information (things) and then uploads the data to predictive-maintenance software. Using the tools, maintenance activities are performed jointly by specialists at the company and Emerson Process Management, St. Louis.

The solution allows plant and enterprise management, with accredited security credentials, to observe key indicators from a PC, smartphone, or tablet. Smart-alarm features are also included for critical equipment. “If vibration exceeds a predetermined alarm, then signature and waveform data are immediately saved for analysis,” according to the blog post. A yellow or red indication appears on a device’s screen and provides “specific points and parameters in the alarm.”
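The smart-alarm behavior described in the blog post can be sketched as a simple threshold check. The thresholds, field names, and severity colors below are our assumptions for illustration, not Emerson's actual implementation:

```python
# Illustrative sketch of the smart-alarm logic described above:
# when vibration exceeds a preset level, capture signature/waveform
# data and flag severity. All values and names are assumptions,
# not Emerson's implementation.

ALERT_LEVEL = 4.0   # mm/s, yellow "watch" threshold (example value)
ALARM_LEVEL = 7.0   # mm/s, red "act now" threshold (example value)

def evaluate_vibration(reading_mm_s, capture_waveform):
    """Classify a vibration reading; save waveform data when alarming."""
    if reading_mm_s >= ALARM_LEVEL:
        capture_waveform()   # save signature and waveform for analysis
        return "red"
    if reading_mm_s >= ALERT_LEVEL:
        capture_waveform()
        return "yellow"
    return "normal"

saved = []
status = evaluate_vibration(8.2, lambda: saved.append("waveform"))
print(status, len(saved))   # red 1
```

In the real system this classification would run continuously against streaming sensor data, with the captured waveforms feeding the predictive-maintenance software described above.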

This is but one example of Internet of Things in action. Suppliers are just beginning to realize better ways to handle more data points in the factory or field.

Working on an article about a manufacturing standard for multinational companies a couple years ago, I stumbled across the “Internet of Things Strategic Research Roadmap,” produced by the IoT European Research Cluster.

This groundbreaking 50-page research paper provided a comprehensive, structured view of IoT, as of 2011, for manufacturing and consumer applications. It’s interesting that the paper includes a passage about the year 2015: “By 2015, wirelessly networked sensors in everything will form a new Web. But it will only be of value if the ‘terabyte torrent’ of data it generates can be collected, analyzed, and interpreted.”

As we can see, that torrent of data has arrived, and collecting, analyzing, and interpreting it is a major challenge. Big changes are never easy in any walk of life, but keep visiting our site for vital IoT applications and insights. MT

Grant Gerke is a business writer and content marketer in the manufacturing, power, and renewable-energy space. He has 15 years of experience covering the industrial and field-automation areas and has witnessed major manufacturing developments in the oil and gas, food, beverage, and power industries.