Archive | Automation Strategies

November 9, 2016

Move From Raw Data to Smart Work

Manufacturers are flooded with data. Here’s some guidance to help you put that data into context, understand it, and make it work for you.

By Tim Sowell, Chief Architect, Software, Industry Solutions and Stan DeVries, Senior Director, Solutions Architecture, Schneider Electric

In today’s “flat world” of demand-driven supply, the need for agility is accelerating. This is driving leading companies to transform their operational landscape (systems, assets, and culture) into a “smart work” environment. The move toward agility shifts thinking from a process-centric view to a product and production focus, requiring a dynamic, agile working relationship among assets/machines, applications, and people. The paradigm shift from the traditional “lights-out” concept of fully automated manufacturing to an agile world of dynamically planned and scheduled work requires:

• automated embedded intelligence and knowledge
• augmented intelligence using humans to address dynamic change.


At the foundation of this shift is an environment in which a worker can have the mind space to understand the larger changing situation and make augmented, intelligent decisions and actions. To provide this, data are transformed naturally into operationalized information upon which decisions can be made, then combined with “tacit, applied knowledge,” providing incredible value when taking operational actions.

The explosion of information across industrial operations and enterprises creates a new challenge—how to find the “needles” of wisdom in the enormous “haystack” of information.


One of the analogies for the value and type of information is a chain from data, through information and knowledge, to wisdom. In the industrial-manufacturing and processing context, it may be helpful to use the following definitions:

• Data: Raw information that varies in quality, structure, naming, type, and format.
• Information: Enhanced data that has better quality and asset structure and may have more usable naming, types, and formats.
• Knowledge: Information with useful operational context, such as proximity to targets and limits, batch records, historical and forecasted trends, alarm states, estimated useful life, and efficiency.
• Wisdom: Prescriptive advice and procedures to help achieve targets such as safety, health, environment, quality, schedule, throughput, efficiency, yields, and profits.


To illustrate this transformation, imagine driving along an unfamiliar California freeway in a GPS-enabled rental car:

• Data: The GPS knows that I am on a freeway, traveling at 80 mph.
• Information: It is “situationally aware” that I am heading south on the I-405 freeway.
• Knowledge: It works with other services to determine that 10 miles ahead the traffic is stopped, and provides me with a warning that I will be delayed due to a traffic hazard. It has combined traffic knowledge with my location, speed, and destination to provide timely, advanced decision-support knowledge that I can use to potentially take an action.
• Wisdom: The GPS provides two alternate routes, giving me the time and characteristics of each route.

Without requiring me to take my eyes off the road and use an A-Z directory, I have been:

• warned ahead of time of an issue that could prevent me from reaching my destination on time
• given two alternatives and the information necessary to realize my goal with either choice.

There is no reason why this same transformational journey from raw data to wisdom cannot apply to manufacturing operations.
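To make the chain concrete in a plant setting, the short sketch below (Python) walks a single raw sensor reading through the same four stages. The tag names, limits, and recommendation text are entirely hypothetical and stand in for whatever a real asset model and procedure library would supply.

```python
# Illustrative only: hypothetical tag names, limits, and advice.
RAW = {"tag": "TT-4711", "value": 87.4, "unit": "degC"}          # data

ASSET_MODEL = {"TT-4711": {"asset": "Pump P-101 bearing", "hi_limit": 85.0}}

def to_information(raw):
    """Attach asset context and normalize naming (data -> information)."""
    ctx = ASSET_MODEL[raw["tag"]]
    return {"asset": ctx["asset"], "temperature_degC": raw["value"],
            "hi_limit_degC": ctx["hi_limit"]}

def to_knowledge(info):
    """Compare against operational limits (information -> knowledge)."""
    info["over_limit"] = info["temperature_degC"] > info["hi_limit_degC"]
    return info

def to_wisdom(knowledge):
    """Turn the assessment into prescriptive advice (knowledge -> wisdom)."""
    if knowledge["over_limit"]:
        return f"{knowledge['asset']} is running hot; schedule a lubrication check this shift."
    return f"{knowledge['asset']} is within limits; no action required."

print(to_wisdom(to_knowledge(to_information(RAW))))
```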

Avoid the Pitfalls

Many companies stand on the edge of a data swamp that is growing quickly, with the Industrial Internet of Things and Smart Manufacturing providing access to exponentially more data from their industrial value chain. This data influx can either bog down growth or, if leveraged to achieve proportional knowledge and wisdom, create a new level of operational agility. The Fourth Industrial Revolution (Industrie 4.0) provides a framework for leading this ubiquitous transformation. Major industrial organizations are now realizing the incredible value that can be extracted from data and are combining time, resources, and technologies, such as big data and machine learning, with a new evolution in operational culture to leverage this potential.

Those operating in manufacturing have been living for decades with vast amounts of data located in historians, equipment logs, and across their extended supply-chain network. Data, in and of itself, is not of much value. The same can be said for reams of paperwork that document best practices—it isn’t of much value sitting on a desk or in a document-management system.


Operational Data Management

We all talk about the ability to generate data from different devices. This can be valuable, provided there is some enterprise integration. But, can you really have effective information if there is no context?

The challenge is how to gain this context and then sustain it over several devices (things) without having a significant impact on those devices. In other words, how does one add, remove, and evolve devices? This requires an operational data-management system that serves as a “yellow pages” of the system, providing the context and relationships between devices and the operations.

An operational data-management system makes it possible to register new devices and data inputs, keep the detail in the device, and then align them with the bigger operational process. It provides the association, that is, alternate names for a device, so other applications can find and interact with it. Other systems and machines often have a different outlook on the process and use different naming and references for the same device. An operational data-management capability provides this association and aligns many devices without changing the underlying applications or devices.
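A minimal sketch of that “yellow pages” idea follows. The device IDs, aliases, and consuming applications are hypothetical, and a real operational data-management system would carry much richer models, but the core association of one device with many application-specific names looks roughly like this:

```python
# Hypothetical sketch of an operational data-management registry.
class DeviceRegistry:
    def __init__(self):
        self._devices = {}        # canonical id -> device metadata
        self._aliases = {}        # (application, alias) -> canonical id

    def register(self, device_id, metadata):
        """Register a new device without touching other devices or apps."""
        self._devices[device_id] = metadata

    def add_alias(self, application, alias, device_id):
        """Let another system use its own name for the same device."""
        self._aliases[(application, alias)] = device_id

    def resolve(self, application, alias):
        """Find the device an application is referring to."""
        return self._devices[self._aliases[(application, alias)]]

registry = DeviceRegistry()
registry.register("VFD-12", {"line": "Filler 3", "protocol": "Modbus RTU"})
registry.add_alias("MES", "FILLER3_DRIVE", "VFD-12")     # MES naming
registry.add_alias("Historian", "F3.VFD.012", "VFD-12")  # historian naming

print(registry.resolve("MES", "FILLER3_DRIVE"))
```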

From a data-to-information point of view, it provides the context needed to gather data and transform it into information, so that big-data analysis and other tools can be applied to convert that information into knowledge. Knowledge provides a pattern that ensures contextualized operational data (production, quality, machine status) is integrated with templated collaboration activities and, ultimately, broad value/supply-chain management.

Without this, companies have a real risk of gathering significant amounts of data and being unable to create the associated proportion of information, knowledge and, eventually, wisdom. Knowledge allows companies to put architectures and systems into place and gain contextualization while providing the plug-and-play ability for devices to be added to the solution.

The cost to store and share data has dropped significantly, and a simplistic expectation is that, even as storage grows by a factor of millions in only a few years, the pattern illustrated in Fig. 4 will simply evolve along with it.

Fig. 4

Although the pattern might seem to be logical, it is actually a nightmare, because it becomes much harder to discover and translate knowledge and wisdom from another operation, especially in another location, to the local needs. But there is a solution.

To understand the problem better, consider that knowledge includes context. This context begins with local details, including time, location, process or machinery configuration, raw materials, energy, and the products being processed or produced. Wisdom that achieves and sustains best performance for the community, customers, and the corporation is already valuable at this local level. If it has enough wisdom, the local context needs to know only its immediate information.

Now consider what happens when a single site, a fleet of similar sites, or an enterprise has numerous similar operations. How can local wisdom be enhanced with wisdom from other operations? Solving this problem is important for operations transformation, such as operating physical assets as one (in a chain or as peers) or supporting multiple operations with a flexible team of remote experts.

One approach to solving the knowledge-proliferation problem is to take advantage of a methodology used in distributed databases known as “federated information.” This technique is especially valuable in industrial operations-management architectures. Federated information does not change the local information’s naming or structure; instead, it provides multiple translations, across the database, for multiple similar structures and for multiple contexts, such as those required by financial, technical-support, scheduling, quality, and other functions. It is an alternative to the fragility and complexity of forcing a uniform, all-encompassing naming and structure in an attempt to satisfy every application and user.
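The federation pattern can be sketched in a few lines. Local tag names stay untouched; each consuming context (the quality and scheduling views below are hypothetical) carries its own translation of the same underlying data:

```python
# Hypothetical federated-information sketch: local names are never changed;
# each context carries its own translations of the same underlying tags.
LOCAL_DATA = {"MX-07.TEMP": 87.4, "MX-07.RATE": 412.0}   # site-local naming

CONTEXT_VIEWS = {
    "quality":    {"MX-07.TEMP": "Mixer7.ProductTemperature"},
    "scheduling": {"MX-07.RATE": "Mixer7.ThroughputUnitsPerHour"},
}

def view_for(context):
    """Project local data into a context's vocabulary without renaming it."""
    mapping = CONTEXT_VIEWS[context]
    return {ctx_name: LOCAL_DATA[local_name]
            for local_name, ctx_name in mapping.items()}

print(view_for("quality"))      # {'Mixer7.ProductTemperature': 87.4}
print(view_for("scheduling"))   # {'Mixer7.ThroughputUnitsPerHour': 412.0}
```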


The same approach can be applied to wisdom. Hobbyists and enthusiasts around the world already share wisdom for restoring cars, making furniture, playing a musical instrument, and gardening, to name a few examples. Someone with no experience at all can ask, “Where do I get started?” and most respondents will provide some type of advice. In the same forum, experts can share wisdom that is valuable to, and understandable by, peers at their level of experience. This wisdom is extremely decentralized, and the experts provide the local and regional translations.

In the industrial-operations environment, federating wisdom is partially automated by expanding the local context to include information about adjacent operations and about the chain or peers when those operations are managed as one; knowledge is then expanded by applying the context of group targets and performance.

Some enterprises have hundreds, or even tens of thousands, of similar operations supported by a few dozen experts or fewer. Discovery of wisdom is greatly enhanced by maintaining an architecture that enriches local context without modifying local operations or forcing burdensome structures on them.

Empowerment through Wisdom

So how do operational intelligence, industrial analytics, and the movement toward wisdom relate? They are different, but all of them relate to empowering an operational workforce to make earlier decisions and take informed actions. One of the big drivers for platforms is managing variance. We talk about supervisory, MES, information, and simulation platforms, but we also must have a people platform that covers:

• collaboration between people
• activity hosting, including embedded information/knowledge and associated actions
• transformation of information to situation awareness for the user who is interested/interacting
• management of operational work between team members
• notifications.

This people platform will mitigate workforce turnover by abstracting the different skill and experience levels with embedded applied knowledge (wisdom), so the experience is now in the system. This is a key concept for operational transformation.

Industrial analytics provide the shift from the past, through the present, and into the future, based on high-fidelity models gained from experience. They add a new dimension to workers’ tools and transform the decisions workers are about to make. Industrial analytics look to the future, pairing answers about what will happen with recommended actions to take.

This also provides the answer to “What should I do next?” with experience, forethought, and understanding. Operational intelligence furthers decision making by providing screens/presentations of the situation or known questions with context and awareness.

Operational intelligence provides the worker with an understanding of “now,” where he/she is, and what the future holds, simply and clearly. Increasingly, there is demand for this type of operational window and views. It is not analysis but practical information around the current situation and immediate future. Just a simple view of the task or question provides the clear awareness and actionable answers.

Are these different experiences? No, they are all functional value expansions on each other, and should be seen as building blocks on the road to providing an operational execution knowledge platform, with built-in experience. In other words, they provide a foundation for absorbing turnover and transition in the workforce while maintaining operational consistency and efficiency.

There is a journey to smart work that organizations are now following, much like the first continuous-improvement initiatives, such as Lean, Six Sigma, and TQM, that began nearly 50 years ago. Operational execution embeds that journey in the systems and culture, enabling proportional growth in knowledge and wisdom so that organizations can address the dynamic world of smart work. The only difference is that operational, data-driven systems can now be part of these continuous-improvement strategies.

Manufacturing is in a constant drive to improve performance, and transformation of work has become the main method to achieve and sustain this. Higher-capacity or more-efficient machinery and processes aren’t sufficient anymore. Manufacturers with agile and cyclical operations need a way to remain cost competitive during lower-throughput periods, yet stay responsive enough to take full advantage of high-throughput or high-margin conditions.

Implementing systems that transform work with higher-value information and reliability changes when, where, and how users make decisions. That change is the foundation for this next level of improvement.

Operational transformation through smart work is a journey, and technology is only one of the key elements. The user culture must adapt, similar to the previous waves of quality, safety, health, and environmental improvements. The journey advances with work-process improvements, as applied to sections of a site or an entire site. Existing software must be assessed in terms of delivering knowledge and wisdom and supporting mobile and traveling workers, with the goal of significantly reducing the skill and effort required to maintain them. The journey is worthwhile, practical, and essential for manufacturers not only to stay competitive, but also to thrive.

Tim Sowell is vice president of Software System Strategy at Schneider Electric, Lake Forest, CA. In this role, he leads the direction and strategy for the company’s Wonderware software portfolio. Stan DeVries is senior director, Solutions Architecture at Schneider Electric Software. He works with customers to implement innovative, reproducible data-architecture patterns and reference architectures.

September 12, 2016

SSR or EMR? Select the Right Relay

Solid-state and electromechanical relays are not necessarily interchangeable. Evaluate your application before deciding which to use.

Solid-state relays (SSRs) are replacing electromechanical relays (EMRs) in many applications across industry. There are several reasons why, including their long life, low noise, compact size, lack of moving parts, and total absence of arcing. These advantages make SSRs a popular choice for applications involving repetitive operations or fast turn-on/turn-off times, or in areas that require minimal electrical noise.

So, what types of SSR or EMR relays are right for the various applications in your plant? Automation professionals at Opto 22 (opto22.com), in Temecula, CA, provide some selection guidelines.

Use SSRs in applications that require:

Repetitive operation cycles. Such applications include lights and electric heaters. SSRs have no mechanical components to wear out and no failure mode related to the number of operation cycles.

Minimal electrical noise. SSRs greatly minimize electrical noise because they turn on and off when voltage is zero in the AC cycle. Conversely, most EMRs turn on and off at any point in the AC cycle, which means they can generate significant voltage spikes, causing electrical noise that can affect other devices in the area.

High-speed timing. SSR turn-on times are highly predictable, while times for a mechanical relay vary based on the nature of the device and the environment.

Consider EMRs in applications that require:

High starting loads. Such applications include motors and transformers. SSRs are more sensitive to voltage transients than EMRs. If a relay gets hit hard enough a sufficient number of times, even SSRs with good transient protection will degrade or fail. This makes SSRs less ideal for driving highly inductive electromechanical loads, such as some solenoids and motors.

Operation in high-temperature environments. SSRs become less efficient as the relay temperature rises. The current rating for an SSR is de-rated, or reduced, based on the ambient temperature. EMRs are not affected in the same way.

Zero leakage current. In the “off” state, an SSR will exhibit a small amount of leakage current—typically a few mA. Because EMRs are mechanical, they do not leak current. MT

Special SSR Concerns

According to automation experts at Opto 22 (Temecula, CA), two factors inherent to semiconductor-based solid-state relays (SSRs) require special attention:

Leakage current. When in the “off” state, an SSR will exhibit a small amount of leakage current, typically a few mA. It’s slight, but this current can keep some loads from turning off, especially in high-impedance applications, such as small solenoids or neon lamps, that have relatively small “hold-in” currents. When SSRs that switch high voltages are electrically open, leakage current can still cause their circuits to produce potentially troublesome voltages on the outputs. These issues can usually be addressed by placing a power resistor, sized to carry 8 to 10 times the SSR’s rated maximum leakage current, in parallel with the load.
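As a rough worked example of that sizing rule (values are illustrative; confirm against the SSR and load datasheets), the bleed resistor is chosen so it carries 8 to 10 times the rated leakage at the load voltage:

```python
# Rough bleed-resistor sizing sketch for SSR leakage current.
# Values are illustrative; always confirm against the SSR and load datasheets.
line_voltage_v = 120.0      # load voltage (example)
rated_leakage_a = 0.005     # 5 mA maximum leakage from the SSR datasheet (example)
factor = 10                 # size for 8-10x the rated leakage; 10 used here

bleed_current_a = factor * rated_leakage_a
resistance_ohm = line_voltage_v / bleed_current_a    # R = V / (k * I_leak)
power_w = line_voltage_v ** 2 / resistance_ohm       # continuous dissipation

print(f"Bleed resistor: ~{resistance_ohm:.0f} ohm, dissipating ~{power_w:.1f} W")
# -> roughly 2,400 ohm dissipating about 6 W in this example, so a generously
#    rated power resistor is needed.
```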

Operational-temperature limits. Semiconductor-based relays become less efficient as their temperature increases. Thus, the current rating for an SSR is de-rated, or reduced, based on the ambient temperature. Since SSRs also generate heat in the “on” position, heat management is vital.
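A simple derating check is sketched below. The 40 C knee and 2-percent-per-degree slope are placeholder numbers, not values from any particular SSR; actual derating curves come from the manufacturer’s datasheet.

```python
# Illustrative SSR current-derating check with made-up curve parameters.
def derated_current(rated_current_a, ambient_c,
                    knee_c=40.0, derate_per_degc=0.02):
    """Linear derating above a knee temperature (placeholder curve)."""
    if ambient_c <= knee_c:
        return rated_current_a
    factor = max(0.0, 1.0 - derate_per_degc * (ambient_c - knee_c))
    return rated_current_a * factor

print(derated_current(25.0, 55.0))   # 25 A SSR at 55 C ambient -> 17.5 A here
```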

—Jane Alexander, Managing Editor

June 20, 2016

White Paper | How To Design an Industrial Internet Architecture

Source: Industrial Internet Consortium


Interoperability has been a “mantra” in manufacturing for some time, but management needs more resources to fully realize the IIoT. The industrial internet depends on interoperability, which is why this reference paper on industrial architecture can be a valuable asset in developing plant or process-manufacturing strategies. The Industrial Internet Consortium recently released this Industrial Internet Reference Architecture white paper, which provides multiple points of view for the enterprise: connectivity, functional, implementation, safety, communication security, data distribution, secure storage, and integration best practices.

Chapter 13 discusses edge-networking principles and recommends a blueprint for data-reduction techniques, along with other storage best practices. Contributors include a who’s who of technology suppliers and manufacturers, such as ABB, GE, SAP, IBM, RTI, Fujitsu, Intel, Micron, and AT&T, to name a few.

Download the White Paper >>

February 8, 2016

IoT Offers Reliability Solutions


By Grant Gerke, Contributing Editor

In my coverage of the manufacturing and process industries for the past 15 years, I’ve seen plenty of marketing buzzwords and campaigns come and go. Mechatronics, Sustainability, NextGen Manufacturing, Security 2.0 and, of course, Internet of Things (IoT) are just a few of the recent ones. However, IoT is truly a transformative change for manufacturing and, with it, maintenance and reliability.

This year, Maintenance Technology magazine will start leading our readers through the forest of buzzwords and content to deliver real insights into how your maintenance team can benefit from IoT technology. This bi-monthly column is the gateway to a steady stream of IoT content at maintenancetechnology.com/iot. Our online destination will include podcast interviews with subject-matter experts, application insights, video reviews, and content from leading experts.   

IoT is nothing new for maintenance teams, with third-party services already playing a huge role in operations and, in turn, more connected machines and systems. Machine analytics made possible by ubiquitous sensors, robust networks, and standard interfaces create new opportunities and solutions for enterprises. This isn’t a marketing campaign for the next couple of years, it’s a structural change.

One example is remote vibration analysis for large enterprises as they try to consolidate resources across multiple plants. In a 2015 post on the Emerson Process Experts blog, Jim Cahill cited a power-producer application in which personnel “remotely monitored their rotating machinery to improve reliability and prevent disruption for their customers.”

A North American power company used Emerson’s machinery health monitors for critical machines in three different facilities and tied them back to its predictive-maintenance server. For non-critical machinery, the maintenance team uses portable analyzers to gather information (things) and then uploads the data to predictive-maintenance software. Using the tools, maintenance activities are performed jointly by specialists at the company and Emerson Process Management, St. Louis.

The solution allows plant and enterprise management, with accredited security credentials, to observe key indicators from a PC, smartphone, or tablet. Smart-alarm features are also included for critical equipment. “If vibration exceeds a predetermined alarm, then signature and waveform data are immediately saved for analysis,” according to the blog post. A yellow or red indication appears on a device’s screen and provides “specific points and parameters in the alarm.”

This is but one example of Internet of Things in action. Suppliers are just beginning to realize better ways to handle more data points in the factory or field.

Working on an article about a manufacturing standard for multinational companies a couple years ago, I stumbled across the “Internet of Things Strategic Research Roadmap,” produced by the IoT European Research Cluster.

This groundbreaking 50-page research paper provided a comprehensive and structural view of IoT in 2011, for manufacturing and consumer applications. It’s interesting that the paper includes a passage about the year 2015: “By 2015, wirelessly networked sensors in everything will form a new Web. But it will only be of value if the ‘terabyte torrent’ of data it generates can be collected, analyzed, and interpreted.”

As we can see, that torrent of data has arrived, and collecting, analyzing, and interpreting data is a major challenge. Big changes are never easy in any walk of life, but keep visiting maintenancetechnology.com/iot for vital IoT applications and insight. MT

Grant Gerke is a business writer and content marketer in the manufacturing, power, and renewable-energy space. He has 15 years of experience covering the industrial and field-automation areas and has witnessed major manufacturing developments in the oil and gas, food, beverage, and power industries.

June 12, 2015

Gateways Make Systems Multilingual

Industrial settings, where multiple types of equipment must communicate with each other, can benefit substantially from an Ethernet gateway.

By Rick Carter, Executive Editor

Industrial Ethernet gateways streamline equipment communication by handling protocol conversion. They’ll also monitor energy use and add functionality to older equipment. 

If you don’t know about the industrial Ethernet gateway, you may be missing an opportunity to improve communication among your automated equipment, make older equipment more functional, and simplify activities such as network troubleshooting and energy monitoring. The Ethernet gateway, also referred to as a protocol converter or simply a gateway, is a standalone device that converts a signal from one protocol to another. It can convert Modbus RTU (remote terminal unit) to Modbus TCP (transmission control protocol), for example, or make other conversions, such as Modbus RTU to PROFIBUS or Profinet. It achieves this with a built-in CPU and memory that allow connected equipment to communicate directly with the gateway instead of with each other through the PLC. This eliminates the need for separate, more complicated protocol-conversion processes; the conversion is handled entirely by the gateway.
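To make the conversion idea concrete, here is a minimal protocol-level sketch, not any vendor’s firmware, of how a Modbus RTU request maps onto a Modbus TCP request: strip the RTU CRC, carry the slave address over as the unit identifier, and wrap the same PDU in an MBAP header.

```python
# Minimal Modbus RTU -> Modbus TCP framing sketch (illustrative, not gateway firmware).
import struct

def rtu_to_tcp(rtu_frame: bytes, transaction_id: int = 1) -> bytes:
    """Convert one Modbus RTU request frame into a Modbus TCP frame.

    RTU frame: [slave address][PDU][CRC16 lo][CRC16 hi]
    TCP frame: [MBAP header: txn id, protocol id=0, length][unit id][PDU]
    """
    unit_id = rtu_frame[0]
    pdu = rtu_frame[1:-2]                 # drop slave address and 2-byte CRC
    length = len(pdu) + 1                 # unit id + PDU
    mbap = struct.pack(">HHH", transaction_id, 0, length)
    return mbap + bytes([unit_id]) + pdu

# Example RTU request: slave 1, function 3 (read holding registers),
# start address 0x0000, quantity 2, followed by a placeholder CRC.
rtu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02, 0xC4, 0x0B])
print(rtu_to_tcp(rtu).hex())
```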


This is a typical gateway connection linking a variable-frequency drive, using Modbus RTU, with an Ethernet-enabled PLC.

New capabilities

Gateways have existed for at least a decade but have recently acquired new capabilities. “They’re smaller and more compact and there’s support for more protocols,” said Paul Wacker, product manager, Americas, for Moxa, a Brea, CA-based maker of gateways and other automation solutions for industry. “And for maintenance, there are more maintenance and monitoring features that make it easy for troubleshooting and access to data, such as for plant-energy use.”

The added protocol capabilities also help extend the life of equipment that uses older protocols. “One of the gateway’s real values is making something old work with something new,” said Wacker, who uses variable-frequency drives (VFD) as an example. “Most drives installed within the last five years have a built-in Modbus port,” he stated. “But Modbus is one of the oldest protocols for communication in industrial automation, and suppose you need to get an older drive to talk to a newer PLC that doesn’t do Modbus. Instead, it does, say, Ethernet IP, which is the predominant standard for some PLC makers in the U.S. There aren’t many options that allow you to do this.”

The available options might include buying a conversion card for the PLC or installing a more-complex third-party device that would need a program written for it. The simpler, less-costly gateway option, said Wacker, requires “dropping in a small DIN-rail mount box and, through simple configuration of fill-in-the-blanks, telling the gateway what you want to read, and to make it available to your PLC on Ethernet IP.”

Installation and set-up does require planning, of course, but the fill-in-the-blanks approach was devised to eliminate the need for a programmer, IT technician, or engineer. “We spent a lot of time making sure it’s easy to set up and use,” said Wacker. “So you don’t have to be an engineer who must know all the intricacies of all the protocols, which can be detailed. We’ve made it high-level enough so someone with basic skills can put it together.”

The gateway device is also designed to be versatile. It allows monitoring of various aspects of a VFD, for example, after the system is up and running, and makes it easy to add VFDs simply by reserving I/O space in the PLC and the gateway configuration for future expansion. When new drives are added, only the gateway needs reconfiguration, not the PLC.

Gateways are also used in the field for oil and gas well-head monitoring, and are acquiring a growing role in power monitoring. The Modbus serial ports found on most power meters installed within the last 10 years make this easy, said Wacker. “Using this, the gateway can tie into the meter and pull all of them back to a central-monitoring application.” This leverages the gateway’s ability to give users a window into the data that travels through it, a benefit that’s also helpful when troubleshooting hard-to-find network problems.

“A common troubleshooting problem is when a worker moves a few connections or disconnects something and doesn’t put it back correctly,” said Wacker. “It could also be as simple as a tripped circuit breaker that’s not allowing communications with the attached devices, or it could be a broken cable, but these are typically hard to locate.

“Because the gateway allows you to see the communications going in and out of it, this information is there whenever you need it. This makes troubleshooting far easier than other methods, which usually involve bringing in a laptop with third-party software to locate the problem. And you can do it remotely.”

Industrial Ethernet gateways, though they have existed for years, are now smaller and have more protocol capability than their predecessors.

Challenges

Plant personnel face two challenges regarding the gateway, said Wacker. “The biggest may be simply not knowing the available solutions,” he stated. “People might put this off thinking you need to know a lot about programming because they’re seeing it from a PLC-centric standpoint. The other challenge is that, even though Modbus has been out there forever, younger controls engineers might not be as familiar with Modbus or may not be as experienced with communications issues. In these cases, the challenge is knowing what to ask on these projects and knowing how to put together a solution [using Modbus]. For someone new at this, knowing what to ask can be hard, so it’s important to know what help is available.”

Naturally, this help can come from any gateway maker, such as Moxa, Schneider Electric, Siemens,  and Comtrol, or a plant’s systems integrator. But this task is likely to only become easier. “Right now, choosing a gateway depends partly on what you’re connecting,” said Wacker. “There’s still no such thing as a universal gateway that connects everything. But we’re working on that. We want to have just one gateway that handles many different protocols, perhaps just by turning on special features.”

Wacker also expects future gateway designs will offer more detailed views of the data that travel through them and be able to collect the data. “Users want to offload information to the gateway so it can collect it, store it, and make it available for retrieval by something else,” he said. “We want to make this process more efficient,” added Wacker, “so the gateway will become a consolidation point that might take different types of data pipes and allow the PLC or factory-monitoring application to bring a big chunk of that data over all at once.”  MT

March 12, 2015

Three Ways To Communicate Between Mobile Devices And PC-Based HMIs


Mobile technology allows instant, on-demand access to production data from anywhere. But users must still select the correct method for establishing communications.

By Jane Alexander, Managing Editor

If your company is like most, it could benefit from communications between your automation systems and mobile devices such as laptops, tablets and smartphones. Many recent technology advances have increased options in this area, making it easier for plant personnel to get the data they need via their preferred (and approved) mobile devices.

For many applications, the PC-based Human Machine Interface (HMI) has emerged as the main gateway between automation-system controllers and operations personnel, according to Jeff Payne of the Automation Controls Group with AutomationDirect.com. These applications run the gamut from control of a single machine to automation of entire plants.

“In a typical setup,” Payne says, “PC-based HMI software is purchased from a supplier and configured by the user to communicate with the automation system controllers, such as PLCs, programmable automation controllers and other intelligent devices. The PC-based HMI provides local operator interface at the plant, but most facilities can benefit from expanded access via remote devices.”

Modern PC-based HMI software is usually provided with a means to establish communications with mobile devices. These communications can generally be two-way, with the PC sending data to the mobile devices, and with the mobile devices sending commands to the PC. Payne explains the three main ways of providing this two-way communication from PC-based HMIs to remote devices: directly from the PC, via onsite IT systems and via the cloud.

1. Direct access

Today’s PCs come with many built-in communication capabilities. When coupled with the latest in PC-based HMI software, a powerful platform is created for managing remote devices. The simplest way to establish communications with these remote devices is through the HMI software’s built-in Web server.

For Web-server communications, the PC-based HMI is connected to the Internet via an Ethernet connection. Users can configure the HMI software to serve pages to the Web, and these pages can be accessed by mobile devices through any Web browser. Once the HMI’s Web server is accessed, the mobile device can be used to view data, and also to send commands to the HMI.

HMI software providers with mobile access capability typically use Apache HTTP and Microsoft Internet Information Services Web servers. These mature Web servers continue to evolve and provide centralized SSL certificate support, IP security and client security mapping to ensure a safe, secure connection.

The HMI software can typically be configured to provide varying levels of access for different users. For example, a plant automation engineer may be given full access to view all Web pages, along with the ability to make changes to automation-system setpoints. Payne says a plant manager may only need to view one or two pages showing key performance indicators such as throughput, energy use and quality parameters. Access is controlled by log-in credentials, giving the HMI software a way to uniquely identify each remote user, and to provide each user with only the required level of access.
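The tiered-access idea can be sketched with a few lines of generic web code (Flask is used here purely for illustration; commercial HMI packages handle this through their own configuration, and the user names, roles, and pages below are hypothetical):

```python
# Illustrative role-based page access, not any HMI vendor's implementation.
from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical credential-to-role mapping; a real system would use the HMI's
# user management, hashed passwords, and HTTPS.
USER_ROLES = {"engineer": "full", "plantmgr": "kpi_only"}
PAGE_ACCESS = {"/kpi": {"full", "kpi_only"}, "/setpoints": {"full"}}

def role_for(req):
    """Look up the caller's role from a (hypothetical) identity header."""
    return USER_ROLES.get(req.headers.get("X-User", ""))

@app.route("/kpi")
def kpi():
    # Viewable by both roles: key performance indicators only.
    if role_for(request) not in PAGE_ACCESS["/kpi"]:
        abort(403)
    return {"throughput": 1250, "energy_kwh": 310, "first_pass_yield": 0.987}

@app.route("/setpoints", methods=["POST"])
def setpoints():
    # Only the full-access role may push setpoint changes back to the HMI.
    if role_for(request) not in PAGE_ACCESS["/setpoints"]:
        abort(403)
    return {"status": "setpoint change accepted"}

if __name__ == "__main__":
    app.run(port=8080)
```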

The main advantage of this method is its simplicity. Only one PC-to-Internet connection is needed. The Internet becomes the network, so there’s no need to establish and maintain a separate IT network to link the PC-based HMI to the mobile devices. There is also no need to change the graphics when the device size changes.

Payne notes that the relatively new HTML5 standard makes implementing this option easier. With HTML5, the application launches in a mobile-device browser and automatically resizes the HMI screens to fit the device. A review of just one smartphone and tablet manufacturer finds more than a dozen screen sizes. In the past, when devices presented data, some were compatible and some were not. HTML5 overcomes this obstacle when displaying data and Web pages, as almost all HMI software packages and mobile devices conform to the HTML5 standard.

Users can start small, with just one or two Web pages showing key data that can be accessed by all. Web pages can be added at any time to provide more information. Access can then be differentiated among users so each person or group is provided with only the required level of access.

The main drawback of this approach, Payne acknowledges, is complete dependence on the link from the PC-based HMI to the Internet, as the plant’s Internet service provider becomes the critical link in the data distribution system. There can be security concerns, although modern HMI software provides mechanisms for controlling remote access.


Fig. 1. This PC-based HMI provides local operator interface at a machine, and can also serve Web pages which users can access from any browser on their mobile devices.

A typical case is when a single PC-based HMI is used to provide local operator interface at a machine (Fig. 1). This PC is then connected to the Internet via its Ethernet port, and is configured to serve Web pages to users via the browser-based interface on their mobile devices.


Table I lists pros and cons associated with the direct-access approach, and compares it with onsite IT and the cloud.

2. Onsite IT

Payne says using onsite IT to distribute data from PC-based HMIs to mobile devices provides more power and options than with direct access, but is more expensive and complex. With this option, the PC-based HMI is connected to the plant or company internal IT system through its Ethernet port. In systems with multiple PC-based HMIs, each can be connected to the network. Mobile devices access the PC through the IT network, instead of through the Internet as with direct access.

This option requires an internal IT network to be set up and maintained. Although most plants have such a network for office use, extending the network to the plant isn’t a trivial exercise as it requires close cooperation between the plant’s automation personnel and IT staff.

“The IT staff is likely to see the PC-based HMI as just another node on its network, and treat the PC-based HMI just like it would an office PC,” Payne explains. Among other things, this could mean automatically sending updates and patches to the PC, and remotely rebooting it as required. This is rarely a good idea for a PC-based HMI, he says, as each update or patch must be tested to make sure it doesn’t affect the HMI software and its connections to controllers. Also, reboots must be carefully scheduled so as not to affect production.

“On the other hand,” says Payne, “this method allows plant automation personnel to use existing IT networks along with established remote access practices.” Since most IT departments already have procedures in place for secure remote access, often via VPN, this access can be very tightly controlled to provide a high level of security.

Fig. 2. A large manufacturing facility with multiple PC-based HMIs is connected to the plant’s IT network to provide mobile device remote access.

For mobile-device users, access to the PC-based HMI via the onsite IT network will typically be more complex than with direct access. This is because extra steps will be required to first establish communications with the IT network, and then with the PC-based HMI. Payne describes a typical use as a large manufacturing facility with multiple PC-based HMIs, each connected to the plant’s IT network (Fig. 2). IT would work closely with the plant’s automation staff to provide the required mobile device access.

While the onsite IT option works well in many cases, Payne says it does burden existing IT staff and systems. Moreover, it requires close, ongoing cooperation between IT and a plant’s automation staff—something that may not always be easily achieved. To deal with these and other issues, he says, many plants look to the cloud as a means to establish and maintain communications between their PC-based HMIs and their staff’s mobile devices.

3. The cloud

With this option, the network resides in the cloud instead of with internal IT. Each PC-based HMI is connected via the Internet to a rented network in the cloud. To provide greater reliability, there can be multiple redundant connections from each PC to the cloud, such as through an Internet service provider and a leased communication line.

Network and storage space in the cloud can be rented directly through a provider such as Amazon or Rackspace. This is the lowest-cost option, but it requires a degree of IT expertise because the user must interface directly with the cloud company to define needs. Alternatively, third-party companies can provide cloud services to manufacturing and other industrial concerns. These firms have the IT expertise to deal with the cloud provider, as well as an understanding of unique manufacturing concerns.

Fig. 3. A wide variety of corporate, production and control data can be stored in the cloud and quickly accessed via mobile devices through a Wi-Fi or cellular connection.

In either case, Payne says, mobile users access data through the cloud, requiring only an Internet connection. This connection is typically established through either a Wi-Fi network or through a cellular provider’s 4G network. So if the mobile user can establish an Internet connection, he or she can access data stored in the cloud (Fig. 3).

Storing data in the cloud provides a high level of security because cloud providers maintain large staffs of IT personnel who are well versed in security. Still, hackers continuously try to breach these high-visibility targets, since breaking through Amazon’s security system is more attractive to the average hacker than accessing a small manufacturing plant’s IT network.

A typical cloud-use case, Payne says, would be a facility or number of facilities owned by one company, all requiring remote access via mobile devices from a widely geographically distributed workforce.

Expanding mobility across industry

While PC shipments have been mostly flat over the past five years, tablet and smartphone sales continue to exhibit strong growth. Payne references a 2013 Forbes report stating that more than 56% of adult Americans had smartphones, a number projected to reach 70% by 2018. This, he says, plays directly into a more recent trend known as Bring Your Own Device (BYOD), in which plant personnel in some operations use their own smartphones and tablets to access PC-based HMIs. “Once corporate policy catches up,” he observes, “most will be able to use their own devices within the next few years, with HTML5 a key enabling standard.” MT

For more information on these tactics, visit AutomationDirect.com.

February 17, 2015

Living With And Learning From Your Data


Big Data can be too big for some. Getting a grip on it—and its value—means separating wheat from chaff, say experts, and acting on revealed trends.

By Rick Carter, Executive Editor

A January advertisement for SAP claimed that “complexity” costs the world’s top 200 companies $1.2 billion annually. It went on to say that “Simple saves.” And while some maintenance professionals might find SAP’s plug for simplicity amusing, the business-management software giant is on the mark, not just for the eye-catching dollar amount, but for the cause of those wasted dollars.

Definitions of “complexity” vary, of course, but in the current manufacturing environment, one factor maintenance pros increasingly view as a complexity contributor is data. Suddenly, there seems to be a data surplus. You can’t live without it—the challenge, in fact, has always been to obtain more data—but technology has now met the challenge, and then some.

“With the continued adoption of industrial automation systems, device and equipment data is originating from a variety of technology platforms,” says Juan Collados, Principal Applications Consultant for Schneider Electric. “This includes SCADA and distributed control systems, safety management systems, manufacturing execution systems and mobility applications, to name just a few. Add to that the Internet of Things, where billions of devices and machines are becoming interconnected on a global basis, and we can see why we now have an overabundance of data.”

ABB’s Kevin Starr, Director of Product Management for Process Automation Service, likens manufacturers’ exposure to data to taking a drink from a fire hydrant. “You get really wet,” he says, “but you don’t know what hit you.”

Context drives value

For Starr, Collados and others tasked with making sense of data for clients or crew, determining exactly what does hit you is the real issue, and cannot be a random action. The same technology that provides the quantity can manage and guide it, they say, but it’s first necessary to know exactly what is valuable for your operation. Data is too often “served up without specific context,” says Collados. “Acquiring quality data and transforming it to actionable information, therefore, becomes a focal point in enabling an effective asset-management strategy.”

Here, “quality” means data that has value in its ability to provide useful interpretation of equipment status and trends. But with so much data pouring in, value takes on new meaning, too, says Gil Acosta, Director of Engineering Services at eMaint, a New Jersey-based provider of CMMS software-as-a-service solutions. Asked what he tells clients who are looking for guidance on how to handle the abundance of incoming data, Acosta says he guides them “into finding the metrics of importance. I’m careful to use the word ‘importance’ because it’s easy to get caught up in the standard measurements out there. And if you get too involved in them, they may not be that meaningful at your place of business.”

Metrics that should be reviewed, suggests Acosta, include Mean Time to Repair (MTTR) and Mean Time Between Failure (MTBF). “These are old, reliable measurements that are still useful, but don’t always tell the whole story any more.” They carried more weight, he says, when reliable data was in short supply. “They were a way of getting a small sample size and turning it into information. Now, with the massive amount of data that we have, we have way more calculations available to us than just MTTR or MTBF. We have trend analysis.”

Trend analysis can be as simple as a “change in slope,” says Acosta. It can occur over any period of steady data input of important measurements, as opposed to the former need to catch changes during the brief windows of observation that were typical in traditional data-capture techniques. Important measurements are anything condition-related: temperature, pressure, hours of operation, amperages and others. “And with many condition data points, you can start to monitor trends,” says Acosta, “and see changes in the condition of that asset, making MTTR and MTBF less useful.”
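A short sketch of both ideas follows: classic MTBF/MTTR from a handful of events, and trend analysis as a change in slope over a stream of condition readings. All numbers are hypothetical.

```python
# Hypothetical maintenance metrics and a simple trend (slope) check.
operating_hours = 4_380          # half a year of run time (example)
failures = 3
repair_hours = [4.0, 6.5, 3.5]

mtbf = operating_hours / failures                  # mean time between failures
mttr = sum(repair_hours) / len(repair_hours)       # mean time to repair
print(f"MTBF ~{mtbf:.0f} h, MTTR ~{mttr:.1f} h")

# Trend analysis as a change in slope: fit a line to recent bearing
# temperatures (one reading per day) and flag a sustained rise.
temps = [71.2, 71.4, 71.3, 71.9, 72.6, 73.4, 74.5]   # degC, illustrative
n = len(temps)
xs = range(n)
x_mean, y_mean = sum(xs) / n, sum(temps) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, temps)) / \
        sum((x - x_mean) ** 2 for x in xs)           # least-squares slope

if slope > 0.3:                                      # threshold is a judgment call
    print(f"Rising trend (~{slope:.2f} degC/day): investigate the asset.")
```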

Most of the maintenance pros Acosta meets quickly agree on where such data detail can lead. “It’s the kind of information they have been missing for years and would love to suddenly have,” he says. “Things like cost of ownership, energy consumption, labor consumption, parts consumption. So when they learn how their work-order system collects that data, and how easy it is to get those reports out of the system, the process itself almost becomes an afterthought because it’s so simple. It’s the concept of starting with the end in mind,” he says, “and I can’t tell you how many times I’ve used that phrase. I tell folks to concentrate on the data you want to extract from the system to help you make better decisions. Invariably, I’ll get three or four things right away. Then I’ll say, let’s work backwards. Let’s make sure the data source is there to answer that.”

Let your system do the work

Starr uses a simple analogy to describe ABB’s current approach to providing data context from automated systems. “We’ve put in devices that assimilate the information and sort it,” he says. “And if you can imagine the old game Stadium Checkers with the marbles that would fall into holes at different levels, that vision is really accurate because you have all these levels now, and so much information, but nobody can see anything, so you have to filter it so it catches.”

Some call this process “data reshaping,” says Starr, which “basically means that you’re looking at statistical computations and probability of wave patterns in the data that correlate with known problems. This allows you to then sort it into smaller bins a human can look at and interpret. That’s the new skill I see evolving and is very much needed.”

When it comes to automated systems, this skill is sometimes more easily handled at the vendor rather than plant level. Starr therefore recommends that every time you add a layer of automation to make your life easy, you should add a service component that makes sure that level of automation gives you the right answer. ABB offerings in this regard include its ServicePro Service Management System, a global, real-time database of proven maintenance best practices and schedules.

“ServicePro,” Starr explains, “allows us to look at failure rates from the factory, replacement times, how long should it take, how often should it fail, and we make comparisons with global figures to this particular site. If a part fails on average once a year, but at your plant it fails once a month, we know something’s wrong. One of the issues with data,” he adds, “is that you can make it read whatever you want. But when you have a global average and you have thousands of items—we’re now managing 600,000 parts—you get updates every day” and, ideally, a relatively clear path to problem-solving.

An in-field factor that lends support for a service like this is equipment age. Certain older control systems can make data-capture difficult. “For example,” says Starr, “if the system is not OPC-compliant—and there are many out there like this—you can really be flying blind. A lot of [manufacturers] are trying to compete in this century with old technology. And if you don’t maintain these assets, at some point you’re going to be out of business because you just can’t see the information.” While this stance usually requires him to explain why new systems are so much better, Starr says this is an easy task. “The new systems have fail safes, redundant servers and decoupled processes for collecting data so you can’t harm them by trying to extract data [as in some older systems]. And when customers see what can be done, they’ll often say they want this every day, which means they’re talking about an integrated solution.”

Collados concurs that the newer your equipment, the better your data analysis can be. But he also recommends creating system and data solutions that don’t depend on a single vendor’s approach. “Maintenance professionals should ensure their data collection and analysis solutions are vendor-agnostic to both the sources of data, control and safety platforms, and the business systems (CMMS/EAM) being utilized to manage industrial assets,” he says. “Ease of use leading to end-user adoption is critical, where applications should provide an end-user experience that offers a clearly perceived benefit relative to the implementation and operability investment. Applications should be uniquely intuitive and offer standard configuration ease-of-use concepts such as wizards, drag & drop and templates. Equally important is support from all levels of management in implementing a clearly understood asset-management strategy.”

When data speaks

Getting management support is clearly an ongoing challenge for some maintenance operations. With modern data elements, the process can be much easier, provided data meanings are properly collected and conveyed. “Too many maintenance professionals treat every asset in the plant the same way,” says Acosta. “This is often because it’s so obvious to them when an asset needs replacing that they don’t bother to calculate the return on investment. But not everything is a bald tire. They need to be able to say it’s costing X per month to maintain it, the number of PMs it needs is up, the warranty has expired and it’s near failure. They then have to be able to say, if you give me X to replace this, I’ll give you a return of some sort.”

The response to this approach is typically an approval for funding, says Acosta. But too often the story is not presented that way and the request will be “added to the list.” Noting that the data to support this type of story is probably already in the CMMS, Acosta adds that “you must do the homework to understand what the investment part of it is, and then interpret what you’ll get back. For example, how will current costs change once I make the investment? Answering that means tracking the labor, the parts, the oil consumption and everything else that goes into maintaining an asset.”

And that’s where Big Data can simplify the entire process. Not only can it rapidly take maintenance and reliability teams many steps forward on their continuous-improvement path, it can help bridge the skills-shortage gap most operations now face. “It used to take a person five years to get to the point where they could really maintain a site,” notes ABB’s Starr. “Now, with the tools we have, we can take somebody who is relatively new to the industry and in six months to a year, they’ll be doing work that took me 10 years to figure out. So we’re moving in the right direction.” MT

The 3 Main Types of Data

Data-collection-and-analysis training requirements can be described from the following three distinct, but interrelated perspectives, says Juan Collados, Principal Applications Consultant for Schneider Electric.

Disconnected and Stranded Assets
Equipment and devices outside the automated control and safety network, where a mobility solution can be effectively implemented. This often comes in the form of operator rounds or planned inspection activities supported by mobile data-capturing capability. Data is either collected manually or entered automatically through a handheld device, such as an infrared camera. Condition-based data for this asset base also often originates from third-party services, such as oil spectroscopy and analysis contractors. Regardless of its origin, the data is still relatively raw. However, it can be monitored with rules- and template-based condition-management applications to make it truly actionable.

Instrumented Equipment and Components
This asset base is typically within the control and safety network and can provide valuable maintenance-relevant data that can be transformed into actionable information through condition-management solutions. This data is typically stored in process historian platforms and can be collected through a variety of “data source” communication protocols such as ODBC (Open Database Connectivity) or OPC. Condition-based maintenance rules utilizing analysis tools such as thresholds, statistical process control or even simple expressions can then be associated with the collected data. The analysis should yield the desired results, typically in the form of notifications to maintenance and operations, automatic generation of contextual event-driven maintenance work orders or requests, and an ongoing optimization of the overall maintenance plan.
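A minimal illustration of such a rule is sketched below: a hard threshold plus a simple statistical (mean plus-or-minus three sigma) test over historian samples, with a notification stub standing in for the work-order integration. Tag values, limits, and samples are hypothetical.

```python
# Illustrative condition-based rule over historian samples (hypothetical values).
import statistics

samples = [3.1, 3.0, 3.2, 3.1, 3.3, 3.2, 4.9]   # e.g., pump vibration, mm/s
HARD_LIMIT = 4.5                                 # threshold rule

mean = statistics.mean(samples[:-1])             # baseline from earlier samples
sigma = statistics.stdev(samples[:-1])
latest = samples[-1]

def notify(message):
    """Stand-in for creating a maintenance request in the CMMS/EAM."""
    print("NOTIFY:", message)

if latest > HARD_LIMIT:
    notify(f"Vibration {latest} mm/s exceeds limit {HARD_LIMIT} mm/s.")
elif abs(latest - mean) > 3 * sigma:
    notify(f"Vibration {latest} mm/s is outside the 3-sigma band around {mean:.2f} mm/s.")
```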

Intelligent Devices and Systems
This asset base generally resides within the instrumentation layer, but can include larger equipment and systems. Both can provide critical process and maintenance-relevant data, including, but not limited to, information about their current state and health through self-diagnostic capabilities. It includes devices, such as smart-valve positioners and transmitters (level, pressure, etc.), as well as major equipment, such as HVAC systems in a data center or cleanroom. This type of asset provides advanced data-broadcast capability directly through the base-level automation network. It uses a variety of real-time digital communication fieldbuses, such as HART, Foundation Fieldbus, and Profibus, to name a few. Despite the complexity of these devices, manufacturers now provide rich human-machine interface applications that empower end users to interface easily with this type of device or equipment. Vendor-neutral HMIs based on open standards are also available and increasingly relevant, since one HMI can be used with any device regardless of the original manufacturer. In many cases, smart-device manufacturers provide not only the quantity of data required to manage the asset, but also richness in information quality and context.

Don’t miss Juan Collados’ free Webinar “Lower Your Maintenance Costs Through a Condition-Based Management Approach,” Thursday, Feb. 19. For information or to register, visit maintenancetechnology.com/SchneiderWebinar.
