Archive | November, 1997


12:03 am
November 8, 1997

Seven Ps Of PM

Over the years I have collected scraps of information from articles, technical papers, conference sessions, and conversations in the maintenance community. On occasion those bits seem to form patterns, such as the Seven Ps of PM.

Panic Maintenance: Maintenance performed to repair a failure; often called reactive maintenance after the panic subsides.

Preventive Maintenance: Maintenance performed according to a schedule designed to prevent failure.

Productive Maintenance: The Japanese version of preventive maintenance, which includes life cycle cost management issues.

Predictive Maintenance: Mostly condition assessment with a potential for data-driven prognostication.

Planned Maintenance: A method for increasing maintenance efficiency and effectiveness by coordinating information, tools, and materials for maintenance work.

Proactive Maintenance: A comprehensive maintenance process in which work orders originate with maintenance rather than operations.

Professional Maintenance: Maintenance that integrates the PMs and other technologies and techniques into an effective and economical process for managing equipment assets.

Successful maintenance organizations do well with all the PMs. They also exhibit characteristics exemplified by seven pillars of maintenance excellence.

Paradigms: Constantly revise the model or paradigm of maintenance excellence to reflect value systems important to the enterprise and develop appropriate systems for measuring it.

People: Provide people with training and information to allow them to do their jobs more effectively and reward proactive performance.

Practices: Search out and employ best practices for managing machinery reliability and maintainability, information, and people.

Passion: Demonstrate enthusiastic belief in the maintenance mission and use principles of effective leadership.

Persuasion: Effectively communicate to all departments the positive relationship between reliability and maintainability objectives and enterprise objectives.

Perspective: View maintenance as a value-adding profit-center rather than a cost center or an end in itself.

Processes: View maintenance operations as a process, especially one that can be analyzed and continuously improved using the Deming (Shewhart) quality cycle of Plan, Do, Check, Act.

Thanks for stopping by,




3:59 am
November 2, 1997

Assessing Maintenance Performance

The experience of a large multiplant company serves as a model for developing a benchmarking process and turning the findings into a strategy for improving maintenance and reliability practices.

A growing number of industrial plants are expending considerable effort reviewing their relative competence in maintaining reliable equipment at a competitive cost.

Some are using the traditional approach of consultant-provided surveys or audits; some use a benchmarking approach to help quantify capabilities and compare them with other plant data; others are quite comfortable using their own cost and equipment availability measurements.

DuPont, driven by a global maintenance expenditure of $1.5 billion, has been involved in benchmarking and other forms of assessment since 1986. The concept of “Best Practices” surfaced at DuPont in the late 1980s as it distilled the beliefs and activities of numerous world class benchmarking partners.

Rohm and Haas has similarly been involved with benchmarking partners for the past 3 years, and has used the results to develop its own view of world class practices with a two-tiered assessment process.

The first tier provides an initial assessment that is more subjective, but provides an in-depth look at a site’s practices. The process is team-based and develops a consensus of priority issues that the team can use to drive strategic planning. The second tier process is more quantitative, scoring the site against a more rigorous excellence model.

The observations that follow reflect the benchmarking experience of the authors.

Linking assessments and strategies
When teams are involved in the assessment process, they typically invest a lot of energy and emotion in the critical investigation of their maintenance practices. One way plants have successfully harnessed this energy has been to involve the same team (or parts of it) in developing a future strategy for the plant.

The assessment processes will deliver an enhanced (and often quantified) view of the maintenance practices most in need of improvement. This understanding, combined with the fresh energy of the team, provides an excellent launch environment for developing the strategic plan. Perhaps the most important issue in commissioning a strategic planning team is setting a clear objective and an aggressive timetable. The best strategies are usually the ones developed most promptly after completion of the assessment. If development drags on for 6 months or more, strategies are often superficial and half-hearted.

The strategic planning effort is intimidating because the task list and the resources required are substantial. It takes most team members time to understand that the strategic plan is a long-term process, usually covering 2 to 3 years, sometimes more.

Even if the resources were readily available, many aspects of an improvement plan will involve shifting the plant’s culture and will take several years, regardless of available resources.

Experience suggests that the longer the current culture has been in place, the longer the time required to shift the culture. Some plants never make the shift.

The strategic plan must begin with a clear statement of objectives reflecting the key business benefits of having a strategy.

The objectives should clearly describe a vision of the improved maintenance activity and the impact it will have on the enterprise. Performance measures are a necessary tool for tracking the plan's progress and its evolving benefits. While some of the measures may be those used in the benchmarking or assessment, the strategic plan measures serve a much different purpose: tracking local improvement progress. They are usually fewer and more focused than the measures used in benchmarking to compare the plant with other sites.

Strategic plan measures are accompanied by goal levels representing the improvements being sought by the plan.

Essential parts of a strategic plan
A well-developed strategy needs more than just action steps. While action items in the plan represent activities that ultimately will change or improve maintenance, they are not sufficient by themselves.

To be successful, the strategy must be fully supported by management. It can obtain that support only by demonstrating a tangible benefit to the business. In most cases, the business contribution is related to equipment reliability, equipment maintenance costs, or both.

A complete maintenance strategic plan should include the following elements:

  • A clearly stated objective for the plan
  • An executive summary that briefly describes the scope of the plan and the benefits
  • A listing of the assumptions related to the plan
  • A calculated stake or payback from the strategy
  • A summary of any risks associated with the plan
  • The task items (action steps) representing the actual change effort
  • An assessment of the resources necessary to carry out the task items
  • Estimates of elapsed-time requirements for each task
  • Charts and diagrams as appropriate to track progress
  • A selected set of tracking performance measures for the plan.

Executing the strategy
Perhaps the most difficult part of the improvement process is carrying out the strategy. There will be continuous competition for the time and resources necessary to execute the plan.

The long-term commitment from site management will be acquired through a compelling and persuasive business case presented as part of the plan. Sustaining the commitment will require regular progress reporting, with performance measures that clearly show the strengthening capabilities.

Successful strategies have clearly defined tasks with clearly defined accountabilities. Nothing has proved quite so effective as the following sequence:

  1. Encouragement from management
  2. Expectations set by management (goals)
  3. Clearly stated task descriptions
  4. Single-point accountability for task completion
  5. Periodic progress reporting.

The Rohm and Haas experience
In the early 1990s, Rohm and Haas began to question its deployment of capital for the creation and modification of its manufacturing assets. The company developed its “50/50” initiative for capital deployment aimed at reducing capital by 50 percent and time expenditures by 50 percent.

This stretch goal put pressure on the company to evaluate its construction plans and asset utilization. Instead of only building new facilities, a new focus on the hidden (underutilized) plant began.

This new focus, originally called Maintenance Excellence, assigned the lead role in asset availability to maintenance. It was also intended to change the traditional view of maintenance workers as firefighters who are called out to fix things and then return to the firehouse.

After benchmarking for maintenance best practices in several industries, the focus quickly changed. The company’s maintenance excellence initiative became a reliability improvement initiative. Benchmarking quickly showed the importance of maintenance, operations, and engineering working together to uncover the hidden plant at each site.

The Reliability Initiative, started in 1994, focused on deploying the best practices developed from benchmarking. The company's manufacturing leaders agreed to endorse the initiative.

The reliability policy and two critical business measures, asset utilization (AU) and the ratio of maintenance cost to replacement asset value, were established. The policy was dedicated to improving the reliability of the company’s process plants as a critical part of an integrated strategy to improve business results.
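The two business measures can be expressed as simple ratios. The article does not spell out the definitions, so the formulas below follow common industry usage, and the sample figures are hypothetical assumptions:

```python
def asset_utilization(actual_output: float, demonstrated_capacity: float) -> float:
    """Asset utilization (AU): fraction of demonstrated capacity actually produced."""
    return actual_output / demonstrated_capacity

def maint_cost_to_rav(annual_maintenance_cost: float, replacement_asset_value: float) -> float:
    """Annual maintenance cost as a percentage of replacement asset value (RAV)."""
    return 100.0 * annual_maintenance_cost / replacement_asset_value

# Hypothetical plant: 41,000 tons produced against 50,000 tons demonstrated capacity,
# $3.2M annual maintenance spend on a $100M replacement asset value.
print(f"AU = {asset_utilization(41_000, 50_000):.0%}")                     # 82%
print(f"Maintenance cost / RAV = {maint_cost_to_rav(3.2e6, 100e6):.1f}%")  # 3.2%
```

Tracking both measures together guards against the obvious failure mode of each one alone: cutting maintenance spending looks good on the cost ratio until reliability, and therefore AU, begins to slip.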

Best Practice Manual
Information gained from the 1994 benchmarking effort became the basis for the company’s Maintenance and Reliability Best Practice Manual, called the Blue Book because of the color of its cover. The book contained seven sections: Leadership, Planned Maintenance, Reliability, Human Resource Development, Maintenance Material Management, Contractor Administration, and Effective Information Management. Each section contained the associated best practices followed by key elements (tactics and implementation).

Blue Book practices were aimed at moving a site from reactive maintenance to proactive maintenance. The book was widely distributed prior to the rollout of the improvement program which involved site-specific strategic plans developed through an assessment process done by the corporate maintenance group.

Modifying the assessment process
The word “assessment” strikes fear into most people, especially when it is being conducted by corporate staff. This barrier to being assessed was overcome through some modifications prior to the first assessment.

The assessment was designed to be qualitative because the concepts of reliability were new to most sites. If the assessment were quantitative (including scoring), the concepts would not take hold because participants would worry only about the score and its meaning.

In most cases, the site treated the assessment as a way to showcase its progress in maintenance practices. This qualitative assessment involved tours, data reviews, interviews, and the discussion of business objectives in order to develop findings (current practices that are best practices) and opportunities (gaps between current practices and best practices).

The assessment was performed only at company sites to which the corporate group was invited. The corporate group directing the assessments used members of the company’s maintenance community as assessors.

Most assessors were known to the various sites; however, site residents were permitted to assess only sites other than their own.

The process was directed by a steering team that demonstrated management support of the process and tied it to business goals and objectives. The steering team typically was composed of the corporate maintenance manager, business manufacturing manager, and the plant manager. Two cross-functional teams were used to perform the actual assessment.

The four-member visiting assessor team was schooled in the Best Practice Manual. Sets of questions were developed to aid the process to determine a site’s progress. The early assessments included an outside consultant as a team member to explain reliability concepts that were not understood by the participants. In addition, the consultant could probe when the visiting team began to sympathize with the home team.

Setting the agenda
One member of the visiting team acted as the facilitator and prepared the agenda and the data requirements with the home team 4 to 6 weeks prior to the assessment. The agenda is outlined in the section “Rohm and Haas Agenda for Reliability Assessment.” The remainder of the team was made up of maintenance managers from other company sites, preferably from previously assessed plants. This involvement provided a learning opportunity for all participants.

The function of the home team was to provide the required data for the four-day assessment. The data presentation format is outlined in the section “Information Book for Reliability Assessment.” The team also accepted ownership for the findings and opportunities and presented them to others at the site on the last day of the assessment. This ownership required the creation and implementation of a strategic plan for the site.

The home team was composed of four site representatives, typically the maintenance manager, mechanical foreman, production manager, mechanic, and/or operator. Every attempt was made to match the cross-functional needs of reliability with team composition.

Early assessments did not include plant management, but as pressure increased to improve business results, later assessments included area managers.

Assessment process
The assessment begins with a kick-off and site orientation meeting followed by a tour of the facilities. Other activities include data review and interviews with operators and mechanics. The visiting team has a prepared list of questions such as those outlined in the section “Typical Assessment Questions.”

On the second day, mechanics and operators are followed through their work assignments and various practices are reviewed in detail.

On the third day, the teams develop the findings (current practices which are best practices) and opportunities (practices which could be improved to become best practices).

This is the most difficult day for both teams. By this point, the home team has figured out that the visiting team's mission is not to praise all its efforts. The visiting team endeavors to provide helpful input designed to sell reliability to the home team. This input is the basis for the opportunities that the home team puts into its presentation.

At the end of the third day, the assessment typically generates more than 30 recommendations covering the chapters of the Blue Book. The recommendations are prioritized to facilitate implementation after the visiting team leaves. The AU measure is used for the prioritization: the visiting team identifies which opportunities provide the greatest impact on the site's AU, typically through a "bad actor" equipment analysis. If such data are not available, the visiting team uses some rules of thumb to help the home team prioritize the recommendations.

On the fourth day, the home team presents the findings and opportunities to all the participants, including the steering team. The home team is then directed to prepare a strategic plan (if one does not exist for the site) incorporating the assessment’s results. This plan will be used by the site and business management for implementation. It also provides a baseline for a yearly follow-up assessment.

Sample assessments and strategies
Results from a typical assessment for the category of planning and scheduling follow:

  • Findings (current practices that are best practices)
    • Planner/scheduler in place, and scheduled for formal training
    • Only area assessed that is using CMMS scheduling module
    • Good teamwork between team managers and planner/scheduler
  • Opportunities (gaps between current practices and best practices)
    • Unit is losing significant wrench time due to lack of good planning and scheduling practices by all area personnel
    • Area needs to define a planning protocol and work flow process
    • Agree and adhere to defined work priority system
    • Better coordination with production to ensure equipment readiness for maintenance intervention, e.g., using the permit request
    • Planner should develop pick list for all anticipated part requirements
    • Mechanics must be given a completed work order which includes a job plan.

Selected strategies from a recent assessment include the following points for an objective to reduce break-in work to less than 10 percent:

  • Strategy 1: Review of measures at area manager level
    • Amount of break-in work, percent
    • Amount of preventive and predictive maintenance work (by day, by schedule), percent of total work
    • Amount of overtime spent on preventive maintenance, percent of total work
    • Amount of call-back for uncompleted items, percent of total work
    • Amount of repeat repairs, percent of total work.
  • Strategy 2: Efficient deployment of permits by day and off-shifts.
  • Strategy 3: Reduction in the number of jobs given to contractors for maintenance work.
  • Tasks for strategy implementation:
    • Develop critical equipment list
    • Area manager sponsors a planning team
    • Area manager lays out expectation to do planned work
    • Form a planning team
    • Develop a planning meeting
    • All work requests come from planning team
    • Establish a permit procedure for planned work
    • Develop, steal, or write repair procedures for critical equipment.
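As a sketch of how the Strategy 1 review measures might be computed, the following tallies a week of work orders by category and reports each category as a percent of total hours. The record format, category names, and hours are illustrative assumptions, not the company's actual CMMS layout:

```python
from collections import Counter

def work_mix_percentages(work_orders: list[dict]) -> dict[str, float]:
    """Return each work category as a percent of total work-order hours."""
    total = sum(wo["hours"] for wo in work_orders)
    by_cat = Counter()
    for wo in work_orders:
        by_cat[wo["category"]] += wo["hours"]
    return {cat: 100.0 * hrs / total for cat, hrs in by_cat.items()}

# Hypothetical week of work orders for one area:
week = [
    {"category": "break-in", "hours": 40},
    {"category": "pm/pdm", "hours": 120},
    {"category": "planned", "hours": 240},
]
mix = work_mix_percentages(week)
print(f"Break-in work: {mix['break-in']:.0f}%")  # 10% -- right at the objective's threshold
```

The same tally, run weekly and reviewed at the area manager level, gives the percent measures listed under Strategy 1 without any additional data collection.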

Next steps
The corporate maintenance group completed over 30 reliability assessments in 1996 covering North American and European plants. A similar target is set for 1997. The company’s businesses are utilizing the asset utilization and the ratio of maintenance cost to replacement asset value metrics to guide decision-making on capital deployment. Resources have been deployed to two sites to assist them in becoming company models in reliability. The company has identified “pockets of achievement” in reliability in various facilities. Business results are showing improvement in the two metrics established for this initiative.

The corporate maintenance group reviewed its Best Practice Manual in 1996. A new edition, called the Red Book, was issued in 1997, reflecting lessons learned from the assessments and advances in reliability methodology and practices. The Red Book is more specific about practices and was expanded to include chapters on Reliability Centered Design, Measurements, and Assessments. The reliability network within the company is expanding, and a training course for first-line leaders is being implemented.

This initiative started from a desire to improve the company’s financial performance. It will continue to be driven by the needs of the businesses. MT

Edwin K. Jones, PE, was part of a corporate team that helped E. I. DuPont Co. refine its maintenance practices. He retired from DuPont in 1993 and formed Edwin K. Jones, PE, Inc., 28 Quartz Mill Rd., Newark, DE 19711; (302) 234-3438.

David Rosenthal, PE, is a consulting engineer for Rohm and Haas Co., Bristol, PA; (215) 781-4024. He is responsible for the deployment of reliability best practices throughout the company’s North American facilities.


12:06 am
November 2, 1997

Maintenance – Is There A Silver Bullet Solution?

At a recent maintenance and reliability conference, participants in one session were treated to a commercial sales pitch advocating a specific process as the end-all solution for maintenance. As if this were not bad enough at a conference supposedly cleansed of all supplier influences, the advocacy was built on arguable examples of shortcomings in condition-directed, or predictive, maintenance (PdM).

The presentation asserted that PdM is applicable to only about 20 percent of total potential failures and is cost effective for less than 10 percent. From the questions that followed, it was clear that the assertion, use of unsupported statistics, and specific examples created a great deal of confusion. One individual in the audience stated that his company’s survey of oil refineries disclosed that most used PdM extensively and were satisfied with the results.

Is there any single "silver bullet" solution to maintenance? Should we even expect a single concept or process to be equally effective for a wide range of industrial facilities, from mines to oil refineries, paper mills to food processors, and manufacturers to electric power generating stations, each with different types of equipment and maintenance requirements?

Is the concept of “one size fits all” equally applicable to a progressive facility seeking to fine tune a world class maintenance process as well as a facility that functions solely on reactive maintenance where fire fighting skills are valued more than fire prevention? What are the plans and expectations of companies that have already achieved “best in class” and are now refining their maintenance process to extend their lead?

Survey after survey demonstrates that progressive, experienced maintenance professionals are moving toward more PdM. When the condition of plant equipment can be measured accurately and cost effectively, regressing to visual inspections is a misuse of time and resources, not to mention hazardous to equipment. To suggest that visual inspection is a more effective means to gauge gear condition and wear than PdM technologies, primarily lubricating oil analysis, is ridiculous.

In this case, one could argue that perhaps the PdM tests aren’t being conducted properly or at the correct intervals, but not that they are less sensitive to wear detection or less cost effective than a visual inspection. There is too much well-documented experience favoring PdM.

The assertion that condition measurements are applicable to only about 20 percent of total potential failures and cost effective for less than 10 percent may be correct for a specific industry or if numbers alone are considered. However, experience suggests a different conclusion for most facilities, particularly those with a large concentration of expensive rotating equipment, when the ability of condition measurements to avoid failures is assessed on the basis of probability, cost, and consequences. All failures are not created equal; some are more likely than others, and some cost substantially more than others.

With that said, PdM is not the solution to every problem. In some cases predictive measurements are too expensive when evaluated against the frequency, cost, and consequences of failure. One facility changes belts on roof-mounted ventilating equipment all at once on a regular time schedule. Why? Because it is more cost effective. For the same reason, a manufacturer overhauls riveting machines based on the number of rivets installed. In other cases, proven, cost effective technology does not yet exist to identify probable failures; turbine blade failures are one example.

For the real answer to the question of a maintenance "silver bullet," look inside the toolbox of one of your master mechanics. You will find a broad assortment of tools. The knowledge of when and exactly how to apply each one to gain the greatest results distinguishes a master craftsman.

I suggest that the illustration extends to a maintenance program. Your best program will be the combination of practices and technology that yields the greatest results for your specific equipment and location on the road to optimized maintenance. Reliability centered maintenance (RCM), total productive maintenance (TPM), and planned and predictive maintenance (PM and PdM) are tools. There is no single “silver bullet” solution to every maintenance challenge. Knowledge of what to use and when will distinguish you as the master craftsman (or woman) of a successful maintenance program. MT


9:27 pm
November 1, 1997

Qualifying Motor Repair On Line

Motor repair shops, whether in the plant or commercial facilities outside the plant, should be able to furnish vibration data on repaired motors for use in condition monitoring or predictive maintenance programs. A requirement for such data is being included in motor repair specifications by an increasing number of maintenance and reliability professionals.


Customers can witness motor qualification on a monitor, where spectra can be called up on demand or examined live with time waveforms.

Gary Herr, vibration analyst at Demaria Electric, uses a Microlog CMVA55 balancing wizard to determine motor unbalance.

Demaria Electric Motor Services Inc., Wilmington, CA, uses on-line monitoring to augment hand held data collectors in its motor repair operations. Its electric motor test stands now use multi-parameter vibration data collection technology tied to a local area network (LAN) to assure industry standard quality control. The technology, typically used to monitor critical industrial machinery, is easily adapted to the motor test cell environment.

When faulty motors arrive at the repair facility, they are tested to confirm mechanical or electrical problems. Spectral signatures are analyzed to determine incoming bearing condition, balance tolerance, and rotor bar condition. After motors are repaired, quality is confirmed at a test stand.

Multi-parameter monitoring allows all aspects of the motor frequency spectrum to be analyzed for quality assurance. Before-and-after repair reports contain a percent-change column to justify repairs and lend credence to the customer's predictive maintenance program. Test data are archived for historical reference, documenting the motor's operating condition at shipment.
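The percent-change column can be computed point by point from the incoming and outgoing tests. A minimal sketch follows; the reading units and sample values are hypothetical, not from the shop's actual reports:

```python
def percent_change(before: float, after: float) -> float:
    """Percent change in a vibration level from the incoming to the outgoing test."""
    return 100.0 * (after - before) / before

# Hypothetical inboard-horizontal velocity reading, in./sec peak, before and after repair:
print(f"{percent_change(0.32, 0.08):+.0f}%")  # -75% -- the repair reduced vibration
```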

Demaria Electric incorporated the on-line monitoring system to augment its use of hand held data collectors for motor qualification. The system consists of an SKF Condition Monitoring CMMA320 local monitoring unit (LMU), a 32-channel, NEMA-enclosed vibration monitor with a front-panel switch assembly and BNC connectors to access buffered signals and tachometer speed pulses.

The data acquisition device (DAD) is mounted on the wall next to the motor supply test panel which can power motors up to 3500 hp and up to 4160 V. A hinged 90 deg bend of conduit was fabricated and mounted on the motor test panel to allow the transducers to swing freely over the motor under test with 20 ft of lead length. A BNC connector at the end of the conduit gland fitting provides for optical phase reference input.

Six SKF Condition Monitoring integral lead accelerometers equipped with magnets are used for sensor inputs. System software allows for motor point configuration on a personal computer to be downloaded to the DAD which collects the data and communicates directly to the host computer over the LAN.

Accelerometers are placed in horizontal, vertical, and axial planes on both inboard and outboard bearings. Sixteen vibration points are collected on each motor. A complete set of data measurement points typically takes 6 min. Spectral signatures are collected at 1600 lines of resolution and two averages to allow for detailed frequency analysis.

Horizontal parameters include peak velocity at 10 times running speed of the motor under test, peak acceleration at 100 times running speed, acceleration enveloping, and high frequency detection (HFD).
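The band settings above imply a particular frequency resolution. As a rough sketch, assuming a hypothetical 1,780 rpm motor (the article does not state a test speed), the 10-times-running-speed velocity band at 1,600 lines works out as follows:

```python
def band_resolution(running_speed_rpm: float, orders: float, lines: int) -> tuple[float, float]:
    """Return (Fmax in Hz, resolution in Hz per line) for an order-based measurement band."""
    run_hz = running_speed_rpm / 60.0  # shaft speed in Hz
    fmax = orders * run_hz             # top of the measurement band
    return fmax, fmax / lines

# Hypothetical 1,780 rpm motor, velocity band at 10x running speed, 1,600 lines:
fmax, df = band_resolution(1780, 10, 1600)
print(f"Fmax = {fmax:.1f} Hz, resolution = {df:.3f} Hz/line")  # Fmax = 296.7 Hz, resolution = 0.185 Hz/line
```

A resolution below 0.2 Hz per line is what makes it possible to separate closely spaced peaks, such as twice line frequency from adjacent running-speed harmonics, in the detailed analysis the article describes.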

Vertical and axial measurements include velocity and acceleration parameters.

Velocity measurement allows observation of running speed balance condition, 2 times line frequency electrical condition, lower order bearing condition, seal installation, and rotor rub condition.

Acceleration measurement gives an indication of higher order bearing frequencies and rotor bar frequencies. Envelope demodulation will confirm a bearing problem as repetitive frequencies are accentuated.

HFD provides a reliable indication of bearing installation quality, lubrication, and metal-to-metal contact, as it offers a higher-frequency overall measurement, derived from sensor resonance, that acceleration spectra might not detect. SKF Spectral Emitted Energy (SEE) technology is used to confirm lubrication problems.

Motor test vibration data are sent directly to the analysis computer, running PRISM software. Spectra are updated continuously as the motor under test is exercised. Customers who elect to witness motor qualification can observe the real time aspects of the motor operation indicated on a monitor. Spectra may be called up on demand or examined live with time waveforms. Rolling element bearing condition may be monitored using the software frequency analysis module. BPFO, BPFI, BSF, and FTF frequency overlays on the spectrum point out any bearing fault frequencies.
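The overlay frequencies are the classic rolling-element defect frequencies, computed from bearing geometry and shaft speed. A sketch using the standard formulas follows; the bearing dimensions in the example are hypothetical, not taken from the article:

```python
import math

def bearing_fault_frequencies(run_hz, n_balls, ball_d, pitch_d, contact_deg=0.0):
    """Standard rolling-element defect frequencies used for spectrum overlays.
    run_hz: shaft speed in Hz; ball_d, pitch_d: ball and pitch diameters (same units)."""
    ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
    return {
        "FTF":  0.5 * run_hz * (1 - ratio),                          # cage (fundamental train)
        "BPFO": 0.5 * n_balls * run_hz * (1 - ratio),                # outer-race defect
        "BPFI": 0.5 * n_balls * run_hz * (1 + ratio),                # inner-race defect
        "BSF":  0.5 * (pitch_d / ball_d) * run_hz * (1 - ratio**2),  # ball spin
    }

# Hypothetical bearing: 9 balls, 0.3125 in. balls on a 1.516 in. pitch circle,
# zero contact angle, shaft at 29.67 Hz (about 1,780 rpm):
for name, freq in bearing_fault_frequencies(29.67, 9, 0.3125, 1.516).items():
    print(f"{name}: {freq:.1f} Hz")
```

Because none of these frequencies is an integer multiple of running speed, a spectral peak landing on an overlay line is strong evidence of a defect on that bearing element rather than of unbalance or misalignment.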

Rolling element bearing motors are typically run for 30 min to 1 hr to allow for trend development to judge the integrity of the repair.

Large journal bearing motors are run from 1 to 2 hr to allow for proper stabilization of bearing temperatures and to understand how heat influences the rotor balance condition. This condition will determine whether the rotor will be balanced in place at running speed. If this is the case, the motor may be balanced with the company’s SKF Microlog CMVA55 hand held data collector-analyzer at the motor. The motor may also be balanced using the DAD’s buffered outputs.

Motors also may be monitored using existing eddy current probes in large sleeve bearing motors which can be connected to the system. The eddy current probe outputs also may be used for balancing. Polar vector plots make it easy to track phase angle changes over time for confirming unbalance.

System software is easily accessible to everyone in the shop. Motor parameters are derived according to running speed. Templates for a motor under test are easily created and downloaded to the DAD according to the job number. The software has been customized for a wide variety of motor applications. The operator enters a four-digit job number and motor RPM to create the machinery point parameters.

The database hierarchy is based on customer name with the motor shop reference job number residing within its respective set. Each motor point identification includes the motor job number. The software also allows for adding customer machine number, plant name, purchase order, and other helpful information. Notes may also be taken and saved to the particular motor data set.
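The hierarchy described above might be sketched as a nested structure like the following. The field names, point-naming scheme, and sample data are assumptions for illustration, not the actual PRISM database layout:

```python
# Customer name -> shop job number -> job details and measurement points.
# Each point identification carries the job number, as the article describes.
database = {
    "Acme Refining": {                      # customer name at the top level
        "4721": {                           # four-digit shop job number
            "rpm": 1780,
            "customer_machine_no": "P-101A",
            "purchase_order": "PO-8834",
            "notes": ["Inboard bearing replaced", "Rotor balanced in place"],
            "points": [                     # inboard/outboard, horizontal/vertical/axial
                "4721-IB-H", "4721-IB-V", "4721-IB-A",
                "4721-OB-H", "4721-OB-V", "4721-OB-A",
            ],
        },
    },
}

# Every point id resolves back to its job number:
assert all(p.startswith("4721") for p in database["Acme Refining"]["4721"]["points"])
```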

Large motors (1000 hp and higher) typically are followed out to the field for installation. Only one attribute of the DAD points needs to be changed to allow for downloading shop data motor points to the data collector for on site data collection and baseline comparison.

Motors are run on the base uncoupled to prove sound operation. After alignment, another set of data is collected on the motor as well as the driven equipment for an on-site baseline reference. Data are often collected on a weekly basis (primarily for sleeve bearing motors) to ensure proper working condition.

The on-line system has proved to be a valuable resource for the shop and its customers. If a vibration problem exists in the field, shop baseline data may be easily checked. The reporting capabilities of the software prove motor shop compliance with customer motor vibration specifications. Expert system PRISM4 Pro software can be used in conjunction with the system to provide before-and-after motor repair reports with analysis of incoming and outgoing condition. MT

Information furnished by SKF Condition Monitoring, 4141 Ruffin Rd., San Diego, CA 92123-1841; (619) 496-3400; Internet



November 1, 1997

A Successful Approach To Implementing A CMMS

Defining what the system should manage and planning the implementation are two keys to a successful computerized maintenance management system.

Information is the key ingredient in meaningful decisions on reducing maintenance costs. A computerized maintenance management system (CMMS) is a tool that can provide valuable information about how a maintenance department is performing.

Sandia National Laboratories purchased a CMMS to make good maintenance business decisions through data acquisition and to provide a mechanism for reducing maintenance costs. In order to achieve this goal, a phased implementation was conducted to transition from an old CMMS (and way of thinking) to a new CMMS (and way of managing maintenance). The CMMS was successfully implemented at Sandia because of two important factors: defining what the CMMS should manage, and phasing in the applications of the system in an order consistent with the work control process.

Determining the purpose of the CMMS is the first step in deciding which system to select. If the system is going to be used only to document work being done, then almost any CMMS will do. But if the system is going to manage maintenance activities through data acquisition and analysis, then the choice of CMMS narrows considerably. Sandia took the latter approach in selecting a system that would allow for data gathering and ease of analysis.

A joint application development (JAD) team was given the responsibility to develop and document the requirements for managing maintenance activities. The team consisted of maintenance personnel from managers to craftsmen as well as information systems personnel. The team documented the existing work control process and then looked for ways to improve the process (including all regulatory requirements). Hardware and network requirements were also defined.

The importance of defining the maintenance process and then looking for ways to improve it prior to selecting a CMMS cannot be emphasized enough. A maintenance department has the opportunity to improve maintenance effectiveness when converting from an old system to a new system. Sandia viewed this time as an opportunity to improve the work control process and find a system that would support such an improvement.

Test and review
The implementation of the CMMS began with the development of a detailed plan which stated the order of implementation of each work control process (including training requirements). The plan was developed by the implementation team, which had sole responsibility for replacing the existing CMMS with the new CMMS (work control, hardware, software, and network). The team consisted of work control users, information system programmers, a network administrator, and computer support personnel.

Other teams were assembled to test, evaluate, and make recommendations to modify specific applications prior to implementation. They were:

Test team—responsible for testing and evaluating the new CMMS against all JAD requirements.

Warehouse and procurement team—responsible for testing, evaluating, and implementing the system’s Inventory and Purchasing modules.

Work control team—responsible for integrating the existing work control process with the Work Order module.

Work request team—responsible for integrating the Work Request application with the way work is received.

Decision team—responsible for deciding on specific maintenance topics relevant to the new CMMS to improve the work control process.

The decision team addressed the issues listed in the box on the first page of this article.

By having many teams evaluate different applications of the system, potential problems were identified and corrected prior to the actual implementation. This methodology of reviewing and testing produced a high level of acceptance because users had a major part in modifying the system. Without this acceptance, the project would have failed at implementation.

The actual implementation of the CMMS was performed in two parts: the warehouse and procurement process, and the work control process. During each conversion, a partial changeover was not considered: on Friday the old system was in use, and on Monday all core users were working in the new CMMS. This total conversion worked only because of the modifications made to the system by each team prior to implementation and because all training was conducted before the actual changeover.

Implementing a CMMS is a systematic process of determining the correct order in which to bring up each application. This can be done only with a thorough knowledge of the new CMMS and of current and future maintenance operations. Therefore, the first step in implementing a CMMS is to take all available training classes (user and system administrator) to become an "expert" on how the system works and how it can be modified. Next, a phased implementation has to be developed, reviewed, modified, and accepted by management. Sandia used the following phased implementation:

1. Test and validation. The CMMS was tested against all established JAD requirements to ensure that it would perform all maintenance processes. The entire work control process (with all appropriate data loaded) was tested to understand how the new CMMS administered work from one application to another.

2. Decisions. After a thorough understanding of the system (through training) and a general understanding of how the CMMS administers work was achieved, decisions had to be made to merge the old system and set up the new system. The biggest decision was how to define and set up the equipment assembly structure (EAS). The EAS is the foundation of a CMMS, and a good deal of time was spent defining maintenance tracking levels. All installed facilities systems were defined to the lowest level of equipment maintenance that was to be tracked. This defined what equipment records Sandia was going to keep in the database. The greatest contribution of the EAS is the ability to track maintenance costs at the equipment level (then roll them up to the system level) in order to perform optimal replacement analysis.
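The cost roll-up described above amounts to summing costs over an equipment tree. The sketch below shows the idea; the tree shape, equipment names, and cost figures are illustrative assumptions, not Sandia's actual EAS.

```python
# Sketch of rolling equipment-level maintenance costs up to the system
# level through an equipment assembly structure (EAS).
eas = {
    "HVAC system": {
        "Air handler 1": {"Supply fan": {}, "Fan motor": {}},
        "Chiller 1": {},
    },
}

# Costs are tracked at the lowest defined level of equipment maintenance.
costs = {"Supply fan": 1200.0, "Fan motor": 800.0, "Chiller 1": 5000.0}

def rolled_up_cost(node_name, children):
    """Sum this node's own cost plus every cost below it in the EAS."""
    total = costs.get(node_name, 0.0)
    for child_name, grandchildren in children.items():
        total += rolled_up_cost(child_name, grandchildren)
    return total

system_cost = rolled_up_cost("HVAC system", eas["HVAC system"])  # 7000.0
```

Rolling costs up this way is what lets a system-level replacement analysis be driven by equipment-level history.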

3. Modifications. Every application in the CMMS was reviewed for its applicability to the existing work control process. Each team learned every application, evaluated its functionality, and made recommendations to modify the application (field modifications or additional table requirements) to better fit Sandia's process. Modifications were then made to the application. This approach ensured that the existing work control process was included in the CMMS prior to implementation.

4. Training. Training of all users was developed and conducted by in-house Sandia personnel because they had the system knowledge combined with the work control process knowledge (both current and future). The training program was broken into two groups: inventory and purchasing, and work control. The inventory and purchasing group consisted of all users responsible for procuring and storing maintenance materials (excluding work control flow through the CMMS). The implementation team trained all planners and supervisors on work order flow through the CMMS (excluding inventory and purchasing). Users learned the application and then learned how the application was going to be used at Sandia.

5. Warehouse and procurement. The Inventory and Purchasing modules were the first to be implemented because the CMMS is set up to check for materials before a work order can be moved to in-progress status. All stored material data was converted into the inventory database, and procurement personnel started submitting purchase orders to the Sandia Purchasing Department. Most inventory "bugs" were worked out prior to full implementation of the work control process.

6. Work control. The work control implementation consisted of a comprehensive use of most of the applications in the CMMS. Sandia's work control flow through the CMMS consisted of work order generation, receipt of the new work order, detailing of the work order, assignment of the work order, posting of craft daily time, completion of the work order, and closing of the work order.

7. Equipment. The equipment application is the foundation of a CMMS, and great care should be taken to set up this application correctly. All equipment data was converted from the old CMMS to the new CMMS equipment application. Each piece of equipment was then placed into the correct location of the defined EAS with the appropriate priority assigned to it. Then the correct equipment specification screen was assigned to the equipment for additional nameplate data acquisition.

8. Job plans. Generic preventive maintenance (PM) job plans were written for all equipment types that require PM. These job plans would serve as templates when the PM masters were created. The objective was to build a library of job plans that could be used in future PM development.

9. Preventive maintenance. Preparing a PM master in a CMMS takes a great deal of effort, but yields many benefits. A PM master will automatically generate work orders when they are due and specify appropriate operations, materials, labor, and specialty tool requirements. The warehouse will always know what materials are needed for PM and when they are needed. Sandia is in the process of generating PM masters for all equipment requiring PM by using the EAS equipment priority.
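The behavior described for a PM master—automatically generating a work order when PM is due, with materials known to the warehouse in advance—can be sketched as follows. The field names and the fixed-interval trigger are assumptions for illustration; a real PM master also carries operations, labor, and specialty tool requirements.

```python
# Sketch of a PM master that generates a work order when the PM
# interval has elapsed since the last completed PM.
from datetime import date, timedelta

class PMMaster:
    def __init__(self, equipment, interval_days, materials, last_done):
        self.equipment = equipment
        self.interval = timedelta(days=interval_days)
        self.materials = materials   # warehouse knows these in advance
        self.last_done = last_done

    def generate_if_due(self, today):
        """Return a work order dict when the PM interval has elapsed, else None."""
        if today - self.last_done >= self.interval:
            return {"equipment": self.equipment,
                    "due": today,
                    "materials": self.materials}
        return None

pm = PMMaster("Fan motor", 90, ["grease cartridge"], date(2024, 1, 1))
wo = pm.generate_if_due(date(2024, 4, 1))  # interval elapsed, work order generated
```

Because the materials list lives on the PM master, the warehouse can see what each upcoming PM will consume before the work order even exists.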

10. Failure analysis. Tracking why equipment failed and how to fix it is the final leg of the implementation project. Sandia plans to use technical teams (mechanical and electrical) to define the most common failures for equipment types and define permanent repairs.

The implementation of a CMMS at Sandia has been going on for 2 years and should be completed by the end of this year. The success of the implementation was due primarily to defining up front what Sandia wanted the new CMMS to do: allow us to make good maintenance business decisions through data acquisition and analysis. We are now in a position to start generating performance indicator reports to show how Sandia is doing as a maintenance department. Implementing a CMMS is neither easy nor cheap, but a well set up system will generate the information required for good business decisions on reducing maintenance costs. MT

Bobby Baca is the maintenance engineer at Sandia National Laboratories, P. O. Box 5800, Albuquerque, NM 87185; (505) 844-9057; e-mail
