September 1, 2008

Best Practices In Professional Certification Testing

These authors discuss the arduous process organizations go through in order to put accredited testing in place.

Certification has been the talk of the Maintenance and Reliability (M&R) community for the past several years. This issue is being fueled by several factors that affect today’s M&R industry and its practitioners:

  1. Global competition;
  2. The increasing need for more highly skilled workers, on the mill decks, operating floors and in management;
  3. Shortages of skilled workers at all levels of organizations in various regions of the world.

These factors are causing employers, employees and M&R product and service providers to desire professionally “certified” individuals to carry out particular tasks and manage and lead organizations in the maintenance and reliability field. This trend is projected to grow and accelerate over the next five years.

Tasks for which managers traditionally expected certified people to execute the work involved predictive, non-destructive testing employing technologies such as radiography, magnetic particle testing, eddy current testing and pulse-echo ultrasonics. In the last decade, certifications for infrared thermography, vibration analysis, passive ultrasonic analysis and lubricant and wear particle analysis have come into being. Some predictive technologies, including electrical motor and circuit testing, don't have certification testing schemes in place today, although the possibility for them is being explored by interested groups.

Both written and practical testing is generally applicable to the previously mentioned types of technologies. Technical certification typically is combined with “task qualification” to help an M&R organization assure that a person is “competent.” Technical certifications usually are narrowly focused on a specific technology, tool or method. Professional certification in overall management and leadership of M&R programs typically involves written testing on a broader array of subjects and years of practical experience on the job. This article focuses on the written test portion of professional certification programs with emphasis on testing in the M&R field.

Characteristics of credibility
To be credible and to gain accreditation as a certifying organization, the certification-testing programs that the organization develops must have—at a minimum—the following characteristics:

  • Content of material upon which tests are based (i.e., the Body of Knowledge – BoK) must be current and reasonably available to candidates.
  • The BoK must be based on extensive input by recognized practitioners and potential candidates for certification in the field upon which the testing is to be based.
  • The test must be comprehensive in coverage relative to the BoK upon which candidates are tested.
  • The test—absolutely—must be graded fairly and impartially to determine who passes and who does not.
  • For tests administered to an international candidate pool, the terms used in test questions must, to the extent possible, be universally understandable.

Basis of a test
Many organizations are supplying certification testing in the M&R area today. Scanning just a segment of the field makes it apparent that what these various organizations supply—and how they do it—differs greatly.

Some organizations employ best practice techniques in certification testing. Others may not. For example, many professionals frown upon sponsors that create a training course, then provide a final exam with "certification" conferred on course attendees who "pass" it. (The determination of what constitutes a "passing" score is addressed later in this article, in the section on making a test fair, comprehensive and universal.) The general consensus is that there is an apparent conflict of interest between training and certification, and that there should be an arm's-length (i.e., non-profit-dependent) relationship between training and certification providers. The same problem exists for professional societies that make membership a condition for certification schemes they sponsor.

When the Body of Knowledge (BoK) upon which a test is based is established by too narrow a segment of the profession (e.g., a single commercial training firm or exclusively by members of a professional society), a very serious question is raised: Whose interest is being served when a candidate is certified upon passing a test on the related subject matter, no matter the motivation of the originator(s) of the BoK? This is a particular concern when the BoK is obtainable legitimately only by training course participation or purchase of course training materials.

The most credible certifying activities do not offer or endorse any training courses or even “recommend” any literature covering segments of the BoK. They may, however, provide listings of readily-available books or other sources of information related to the BoK, particularly those from which a number of questions on the certification exam may have been developed—with no guarantee, however, that the questions developed from those sources will appear on any future exams.

The BoK used as the basis for a certification exam must be open to input solicited from a large segment of the affected group of candidates in the profession. For example, "proposed" contents of a BoK may be initiated by a group within a professional society. For the initial attempt at listing the "skill sets" needed for certification, a group of recognized experts in that field should be used. When an update is required later, those who have become "certified" may then be used to initiate any proposed changes.

To be truly credible, however, the BoK’s initiating or updating group should make a determined effort to solicit comment on and validation of proposed final content from the widest possible professional segment that might be affected at the point of certification and beyond. This would require the proposed BoK to be open to review by all interested members of the profession—not just the members of the initiating organization. In today’s Internet-connected world, this is readily achieved and rather easily documented for future reference. BoK updates typically are done every five to seven years in many professions. The process of initiation or updating can take a year or more to complete and document.


Fair, comprehensive and universal testing
A range of certification tests for reliability engineers, maintenance engineers, quality assurance specialists, maintenance managers and maintenance and reliability leaders has surfaced in the marketplace over the past several years. In addition, several certification tests exist for mechanical, electrical and instrument crafts people—but wide acceptance of these has not yet occurred. There is little to no testing for other specialized roles in the M&R field, such as work planners and work schedulers.

To understand certification test best practices, one must start with a discussion of "psychometrics." According to Wikipedia, "psychometrics is the field of study concerned with the theory and technique of educational and psychological measurement, which includes the measurement of knowledge, abilities, attitudes, and personality traits. The field is primarily concerned with the study of differences between individuals and between groups of individuals. It involves two major research tasks, namely: 1) the construction of instruments and procedures for measurement; and 2) the development and refinement of theoretical approaches to measurement."

Simply put, psychometrics provides a framework from which reliable and valid comparisons of individuals can be made across a whole field of study, such as maintenance and reliability, or a particular segment of it. Application of this framework and its rules is the foundation for valid and reliable certification processes.

Using a validated BoK of the kind described here as a basis, test developers originate exam questions, or "items," emphasizing major points that candidates should "master" in order to be certified in a whole field or a segment of it. Certification organizations may desire various levels of difficulty in their certification scheme and would design their test questions accordingly, from easy to difficult. For example, easy questions may have stems, or introductions, that are general or quite broad, and possible responses that are dissimilar enough that the correct answer is easily recognized. Difficult questions will have stems that are specific, with a narrow focus, and possible responses that are similar, only one of which is correct (without being "tricky").

A more complex scheme may have multiple levels, referred to as "Taxonomic Levels," that are characterized in Table I [Ref. 1].

Test developers have found that multiple choice questions lend themselves best and most objectively to the use of psychometrics. There are many types of multiple choice questions that may be used, including but not limited to [Ref. 2]:

  • Those requiring the candidate to choose the one best or correct answer from four or five choices provided (used generally for Knowledge and Comprehension level questions);
  • Questions that require the candidate to match all of the given terms or phrases correctly in order to get credit (used generally for Knowledge, Comprehension and Application level questions);
  • Multiple true/false questions where a candidate must evaluate four or five statements concerning a particular required skill for veracity and select the answer that correctly states which are true and false in the proper order (used generally for Comprehension, Application, Analysis and Synthesis level questions);
  • Questions that require the candidate to choose from a group of items the correct set that solves the problem presented (used generally for Application, Analysis, Synthesis and Evaluation level questions).

Typically, an exam for overall comprehensive certification will have a mixture of questions from all levels indicated in Table I. More specific segment certification exams may have fewer taxonomic levels.

Development of exam questions requires a thorough review for proper grammar and punctuation, as well as adherence to about 20 other rules for writing fair, accurate and truly valid items that test competence. A unique requirement for exams prepared for more than one nation speaking the same language is that the terminology be commonly understood internationally. In addition, questions that require knowledge of one country's laws, customs or local practices are excluded from tests developed for international use.

Once a question has been “developed” (it is reviewed and modified as needed for adherence to written rules for preparing them), it is subjected to statistical evaluation. A typical evaluation regimen has at least three or four interrelated processes. (For a diagram of the various processes, contact the authors via their e-mails at the end of this article.)

One of the most fundamental statistical assessments for proposed exam items is the "Cut Score Workshop Process." A group of at least seven (preferably more, up to about 20) recognized experts (e.g., persons already certified at the level for which the questions are being prepared) is given a set of "post development" questions in the form of a time-limited exam, so they can "feel the pain" in a manner similar to what a candidate for certification would experience. After taking the exam and scoring themselves with an exam "key," participants in the Cut Score Workshop are asked to perform the following evaluation of each question and record their "estimates" in writing:

  • "We are trying to define the acceptable minimum level of competency necessary for a candidate to pass the exam and to describe the minimum knowledge, skills and qualifications of the candidate for which these questions have been proposed.
  • With this in mind, please review each question presented on this exam and estimate the percentage of minimally qualified candidates that would answer the question correctly, entering your estimate in the columns provided. Estimates from all certified people who took this exam will be aggregated and evaluated to determine validity using the processes and criteria established in [our] procedures.”

During the Cut Score Workshop each group of (volunteer) experts also is provided an opportunity to discuss each question and make recommendations for further development, if needed. Some questions may be abandoned or require modification to a degree that forces them to be subjected to another Cut Score Workshop, as with a newly proposed item. The estimates provided by the group on questions that survive are averaged and evaluated for the variance or standard error of the responses received. The resulting number, called the Angoff Score, is established [Ref. 3]. The average of all estimates for each question must be within a reasonable range of the expected passing score for a full set of questions on any form of an actual certification exam. Each item also must have a reasonably low standard error in order to be accepted. That is, the experts who evaluated the item are in reasonable agreement on it, as reflected by the small distribution of their estimates. Large standard error of responses signifies little or no agreement, even when no adverse comments are received during discussion of an item.
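
The arithmetic behind this evaluation is simple to express. The sketch below is a minimal illustration, not the authors' actual procedure: the ten expert estimates, the acceptable range around the expected passing score and the standard-error threshold are all hypothetical values chosen for the example.

```python
from math import sqrt
from statistics import mean, stdev

def angoff_score(estimates):
    """Average of the experts' estimates of the percentage of minimally
    qualified candidates who would answer the item correctly."""
    return mean(estimates)

def standard_error(estimates):
    """Standard error of the estimates; a small value indicates the
    experts are in reasonable agreement about the item."""
    return stdev(estimates) / sqrt(len(estimates))

# Hypothetical estimates from ten workshop participants for one item (percent).
estimates = [70, 65, 75, 60, 70, 68, 72, 66, 74, 70]

score = angoff_score(estimates)
se = standard_error(estimates)

# Hypothetical acceptance criteria; the real values come from the
# certifying organization's documented procedures.
EXPECTED_PASSING_RANGE = (60.0, 80.0)   # reasonable range around the expected passing score
MAX_STANDARD_ERROR = 3.0                # "reasonably low" standard error

accepted = (EXPECTED_PASSING_RANGE[0] <= score <= EXPECTED_PASSING_RANGE[1]
            and se <= MAX_STANDARD_ERROR)
print(f"Angoff score {score:.1f}%, standard error {se:.2f}: "
      f"{'accepted' if accepted else 'needs rework'}")
```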

Note that the Angoff Score is determined for minimally qualified candidates for certification. A second statistical evaluation is performed using a group of average candidates for certification before any question is used for actual scoring on a "real" certification exam. This "Beta Assessment Process" is related to the Cut Score Workshop Process. In Beta Assessment, the questions are exposed to between 75 and 150 "average" candidates in an exam setting. Each question is evaluated for statistics that indicate the following (a minimal computational sketch appears after this list):

  • Degree of difficulty—as measured by number of candidates that select its correct answer;
  • Effectiveness—a metric relating the number of examinees in the upper and lower third of the score distribution who answered the item correctly (should be positive);
  • Correlation—between a given test item and knowledge of the profession that the entire examination (all items on the test taken together) is trying to measure (also should be positive).
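
As a rough illustration of these three statistics, the sketch below computes difficulty as the proportion of beta candidates answering an item correctly, effectiveness as an upper-third-minus-lower-third discrimination index, and correlation as a point-biserial correlation between the item and the total score (one common way to measure item-total correlation). The response data and any thresholds applied to these values are assumptions; the actual criteria live in the certifying organization's procedures.

```python
from statistics import mean, pstdev

def item_statistics(responses, item):
    """responses: list of dicts {item_id: 0 or 1}, one per beta candidate.
    Returns (difficulty, discrimination, item_total_correlation) for `item`."""
    scores = [sum(r.values()) for r in responses]     # total score per candidate
    correct = [r[item] for r in responses]            # 0/1 per candidate on this item

    # Difficulty: proportion of candidates answering the item correctly.
    difficulty = mean(correct)

    # Effectiveness (discrimination): proportion correct in the top third of
    # total scores minus proportion correct in the bottom third (should be positive).
    order = sorted(range(len(responses)), key=lambda i: scores[i])
    third = max(1, len(responses) // 3)
    lower = [correct[i] for i in order[:third]]
    upper = [correct[i] for i in order[-third:]]
    discrimination = mean(upper) - mean(lower)

    # Correlation: point-biserial correlation between the item and the total
    # score across all candidates (should also be positive).
    sx, sy = pstdev(correct), pstdev(scores)
    cov = mean(c * s for c, s in zip(correct, scores)) - mean(correct) * mean(scores)
    correlation = cov / (sx * sy) if sx and sy else 0.0

    return difficulty, discrimination, correlation
```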

When all three of the foregoing statistical requirements meet established criteria, the item may be used for scoring on an actual exam to determine whether or not a candidate will be certified. The Angoff Score stays with its related question throughout the item's useful life. It is used in conjunction with the Angoff Scores for all questions used for grading a given certification exam to determine the passing score or "mastery point." To do this, a test developer will determine the average of all Angoff Scores for all the items used for grading on a given form of an exam. Remember that the Angoff estimates were made considering minimally qualified candidates, so the result for an entire exam-set of questions also reflects this assessment by experts in the professional field. That means that the minimally qualified candidate should receive a (barely) passing grade on the exam.
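
Determining the mastery point is then a matter of averaging. The short sketch below uses hypothetical per-item Angoff scores purely to show the arithmetic.

```python
from statistics import mean

# Hypothetical Angoff scores for the items used for grading on one exam form (percent).
item_angoff_scores = [69.0, 72.5, 65.0, 70.0, 74.0, 68.5]

# The mastery point (passing score) for the form reflects the experts'
# judgment of how a minimally qualified candidate would perform.
mastery_point = mean(item_angoff_scores)
print(f"Mastery point for this form: {mastery_point:.1f}%")
```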

Statistical analysis of items actually in use for scoring on exams is performed on a continuing basis as part of the “Active Item Management Process.” If any item begins to perform statistically in a manner that does not conform to the established criteria set for difficulty, effectiveness and correlation, it is removed from the active pool of questions and subjected to the “Former Active Under Review Process.” If not abandoned outright, the item will be revised, given a new identification number and subjected again to the Cut Score Workshop and Beta Assessment Processes. Only when it meets the statistical criteria for acceptance as an active item will any semblance of its former content appear and be used for scoring on an actual certification exam.

Any exam used for certification must also be comprehensive. That means that, during the exam, the candidate is exposed to as much of the material covered by the validated BoK as the time allotted and the content of the exam question bank allow. Accrediting bodies, such as the American National Standards Institute using guidelines established by the International Organization for Standardization (ISO) of Geneva, Switzerland, mandate that an activity seeking accreditation for a certification scheme be able to prove the comprehensiveness of its exams. Certifying organizations may use one or more methods to prove that their exams are comprehensive. In the broadest sense, a group of questions from each "skill set" described in the BoK is chosen based on the relative importance criteria established during the latest BoK validation process. For example, if there are five skill sets listed in the BoK and they are all considered to be of equal importance, the number of questions selected from each skill set in the question pool will be equal. Within each skill set there may be a different number of skills required. For example, there may be 30 individual skills spread unequally among the five skill sets. Common practice is to require that a high percentage (say 90%, or 27 of the total of 30 skills) be represented on any given form of an exam. When both of the above criteria are met (all skill sets covered in proportion to their relative importance and the required percentage of individual skills, 90% in this example, represented), an exam may be considered comprehensive.
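
These comprehensiveness rules can be expressed as a short calculation. In the sketch below, questions are allocated to skill sets in proportion to hypothetical importance weights, and a form is checked against a 90% skill-coverage requirement; the skill-set names, weights and individual skills are invented for the example.

```python
from math import floor

# Hypothetical skill sets: relative importance weights and the individual skills in each.
skill_sets = {
    "Work Management":           {"weight": 1, "skills": ["planning", "scheduling", "backlog control"]},
    "Equipment Reliability":     {"weight": 1, "skills": ["fmea", "rcm", "cbm", "root cause analysis"]},
    "Organization & Leadership": {"weight": 1, "skills": ["staffing", "training", "kpis"]},
}

def allocate_questions(skill_sets, total_questions):
    """Number of questions per skill set, proportional to relative importance."""
    total_weight = sum(s["weight"] for s in skill_sets.values())
    return {name: floor(total_questions * s["weight"] / total_weight)
            for name, s in skill_sets.items()}

def is_comprehensive(covered_skills, skill_sets, required_fraction=0.90):
    """True if the exam form represents at least the required fraction of all skills."""
    all_skills = [skill for s in skill_sets.values() for skill in s["skills"]]
    covered = sum(1 for skill in all_skills if skill in covered_skills)
    return covered / len(all_skills) >= required_fraction

print(allocate_questions(skill_sets, 60))   # 20 questions per equally weighted skill set
```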

Another element of fairness is established between forms of an exam. For a variety of reasons, mostly related to security of exam content and administration, different forms of an exam will be provided to candidates within the same exam venue. There may be some overlap between forms (e.g., 60% of the questions on one exam form may be the same as on another). The other 40% of questions will be different. However, to be fair to all candidates, the mastery points of different forms must be very close, and the actual performance of (average) candidates taking the exams also must be close. This latter requirement must hold not only for the overall exam, but also within the various skill sets upon which questions are asked. Exam evaluators perform a psychometric statistical analysis process called "equating" to assure this is happening. Exam forms that don't equate are adjusted by modifying the sets of questions selected until they do. To compensate for forms that don't quite "equate," the lowest of the mastery points for all current forms of the exam is used to determine who gets certified and who doesn't, regardless of the form of the exam to which a candidate is exposed.
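
The final adjustment amounts to a simple rule: confirm the mastery points of the current forms fall within a tolerance of one another, and grade every candidate against the lowest of them. The form names, mastery points and tolerance below are hypothetical.

```python
# Hypothetical mastery points for the exam forms currently in use (percent).
form_mastery_points = {"Form A": 69.5, "Form B": 70.2, "Form C": 69.8}

EQUATING_TOLERANCE = 1.0  # hypothetical: larger spreads mean the forms do not "equate"

spread = max(form_mastery_points.values()) - min(form_mastery_points.values())
if spread > EQUATING_TOLERANCE:
    print("Forms do not equate -- revise the question selections before use.")

# Whatever form a candidate receives, the passing score applied is the
# lowest mastery point among the current forms.
passing_score = min(form_mastery_points.values())
print(f"Passing score applied to all forms: {passing_score:.1f}%")
```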

Conclusions
Best practice in certification testing requires that developers pay careful attention to and assure that examination content relates closely to a thoroughly validated, current and relevant Body of Knowledge that is easily accessible to candidates. Lists of sources from which exam questions are developed should be openly and easily available to candidates.

Furthermore, there must be no conflict of interest between those who teach on subjects that may be covered on a certification exam and those who confer the certification upon successful candidates. There also should be no conflict of interest between the certifying activity and its sponsor, be it a commercial entity or professional society.

Exam questions must be developed following many rules to assure that the content of each can be understood by those who speak and read various dialects of the language in which an exam is taken. Questions must be subjected to statistical assessments that clearly establish Angoff Scores, degree of difficulty, effectiveness and correlation with entire sets of questions used—before actual use to determine the outcome for any candidate.

Exams must be constructed in a manner that assures fairness and comprehensiveness—even when different sets of questions are used to produce various forms. The forms must equate to each other, both overall and internally.

All in all, the development, documentation, administration and ongoing execution of certification schemes is a time-consuming and increasingly precise art that demands continuous attention to detail. It cannot be undertaken lightly. The application of psychometrics may demand the use of experts for consultation from the beginning of any certification scheme that an organization desires to be credible in the long term. MT


September 1, 2008

Physical Asset Management For The Executive

Looking for gold in your organization? Doing the right maintenance at the right time for the right reasons on the right equipment is a good way to find it.

In 1979, an MIT report estimated that $200 billion was spent in the United States on the direct costs associated with reliability and maintenance (R&M). It also was estimated that over 14% of the 1979 Gross Domestic Product (GDP) was lost opportunity due to improper R&M practices. This level has continued to increase as a result of aging infrastructure and other reliability-based reasons, to over 20% of the U.S. GDP—or $2.5 trillion in lost business opportunity. That is greater than all but the top three economies of the world! Today, it is estimated that the R&M industry is approximately $1.2 trillion in size, with up to $750 billion being the direct cost of breakdown (reactive) maintenance or generally poor, incorrect or excessive practices.

The primary cause of the loss is that over 60% of maintenance programs are reactive, a figure that includes programs that were initiated and later failed due to 'maintenance entropy,' the collapse of successful programs once significant paybacks are no longer seen. These days, over 90% of maintenance initiatives fail, 57% of CMMS applications fail and over 93% of motor management programs fail. The primary reason for this dismal state of affairs seems to be the current business mindset, which calls for immediate improvements. The cold, hard fact is that it normally takes 12 to 24 months for a supported program to take hold and begin to show results.

Proper R&M best-practice processes have a direct impact on equipment availability, throughput capacity and spare inventories. In addition, the U.S. Department of Energy's Industrial Technologies organization has stated that proper R&M can reduce energy costs by an average of 10 to 15%. For example, properly maintaining electric motors alone would yield annual energy savings of up to 122 billion kWh and greenhouse gas emission reductions of over 74 megatons. When expanded to all maintenance opportunities, the impact is significantly larger.

So, how do we do it? How do we get our arms around this monster known as R&M? It's actually simpler than you might think. It just requires a little focus, an investment of time and support from the executive level on down, in the form of a corporate R&M strategy and local tactics.

Concept of a manageable maintenance program
In any sustainable program (refer to the program map in Fig. 1), you must start with knowing what you own. A program cannot be managed if no one knows what there is to manage. Everything must be surveyed and evaluated for its impact. The initial program must start with a "pilot" area, and expansion of the program must be done in small chunks, as a majority of false starts occur when programs take on too much. This process is referred to as the Facility Asset Census (FAC).

Once a census is completed, a Critical Equipment List (CEL) can be developed by selecting equipment based upon specific criteria that must include, at a minimum, the following (a simple classification sketch follows the list):

  1. Personnel Safety: If a system were to fail and involve personnel safety, it must be considered a critical system.
  2. Regulatory: If the impact involves regulatory issues such as the environmental systems, it must be considered a critical system.
  3. Production: Systems that impact production must be included. Some analysts will select production equipment based upon its impact on the overall production within a facility. The greater the impact, the higher the ranking. Most companies employ a three-level ranking system, but ranking systems as high as 10 levels are known to be used. Each level indicates the amount of attention the equipment receives.
  4. Cost Impact: If a system surpasses a repair or replacement value cost threshold, it should be considered. The average industrial value for consideration is $25,000.
  5. Other Impacts: Such things as working environment, marketing/sales considerations or other systems deemed important by the organization must be considered. This concept often is at odds with many RCM (Reliability-Centered Maintenance) and similar programs to the detriment of the program.
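
As a rough illustration of how these criteria might be applied, the sketch below flags an asset as critical and assigns a simple three-level ranking. The field names and the scoring logic are hypothetical; only the $25,000 cost threshold and the safety/regulatory/production/cost/other categories come from the list above.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    safety_impact: bool            # failure involves personnel safety
    regulatory_impact: bool        # failure has regulatory/environmental impact
    production_impact: int         # 0 = none ... 3 = plant-wide impact
    repair_or_replace_cost: float  # estimated repair or replacement value
    other_impact: bool             # working environment, marketing/sales, etc.

COST_THRESHOLD = 25_000  # average industrial value for consideration, per the text

def is_critical(asset: Asset) -> bool:
    """An asset lands on the Critical Equipment List if any criterion applies."""
    return (asset.safety_impact or asset.regulatory_impact
            or asset.production_impact > 0
            or asset.repair_or_replace_cost >= COST_THRESHOLD
            or asset.other_impact)

def ranking(asset: Asset) -> int:
    """Hypothetical three-level ranking: 1 = most attention, 3 = least."""
    if asset.safety_impact or asset.regulatory_impact or asset.production_impact >= 3:
        return 1
    if asset.production_impact == 2 or asset.repair_or_replace_cost >= COST_THRESHOLD:
        return 2
    return 3

pump = Asset("Boiler feed pump", False, False, 3, 40_000.0, False)
print(is_critical(pump), ranking(pump))   # True 1
```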

Fig. 1. Program map.

The next step is an Equipment Condition Assessment (ECA) where the condition of critical equipment is evaluated. The tests and inspections may be the ones selected for routine testing through maintenance practice development processes such as RCM. The results should be kept on record and equipment that is in poor condition should be scheduled for repair or replacement, at which time significant energy and reliability improvements can be considered.

As shown in Fig. 1, the ECA should be performed in parallel with a Preventive Maintenance Optimization (PMO) and development of Condition-Based Maintenance (CBM) practices. The PMO process can range from something as simple as a review of the existing processes to eliminate redundancies and out-of-date Planned Maintenance (PM) tasks, to more advanced commercial PMO processes. In almost every case, from one-third to two-thirds of existing PM procedures can be eliminated or combined. The remaining PMs should be compared to the results of a CBM review involving processes such as RCM or a Maintenance Effectiveness Review (MER).

The MER involves reviewing the existing testing being performed and comparing it to the failure rates and modes of the equipment being evaluated. If the failure rates and modes still exist and are as high as or higher than they were prior to the application of CBM, improvements to the program should be considered. The process also provides the opportunity to decrease maintenance, as well as to identify new inspections, tests or processes. MERs should be applied periodically; which equipment is included in an MER is generally selected by an experienced RCM analyst or reliability engineer.

Root-Cause-Analysis (RCA) procedures should be determined and personnel trained to ensure that basic RCA can be selected and used by all personnel—and more advanced processes can be used by teams with internal or external facilitation. In either case, all personnel should be made aware of the concepts and application of RCA so that when the process is necessary, the required evidence is maintained.

In addition to the selection of best-practice procedures developed around the foregoing processes, other process-based best practices must be investigated and applied.

The impact of a warranty recovery strategy
The silent killer—and opportunity—within many maintenance programs is warranty recovery. With new equipment and repairs, most companies forget to investigate warranty opportunities on failed equipment. For instance, with electric motors, the average motor repair vendor warranty is one year, with many repair shops improving their competitive position by offering warranties as long as five years. New, premium-efficient electric motors have warranties that range from five to seven years.

Part of the reason that both new-motor and repair facilities feel comfortable offering these warranties is that many companies fail to track warranty opportunities. In a great number of facilities, the missed opportunities are not in the thousands of dollars, but in the hundreds of thousands or even millions. In one plant, returning sensors that failed during the warranty period saved tens of thousands of dollars—every quarter! Tracking warranty dates in CMMS programs or in third-party software can have an immediate impact on the overall maintenance program.
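
Tracking warranty dates need not be elaborate; even a simple lookup of the failure date against the warranty expiration will surface recoverable repairs. The sketch below is a minimal illustration with hypothetical records, not a description of any particular CMMS.

```python
from datetime import date

# Hypothetical warranty records, keyed by asset ID: (in-service date, warranty in years).
warranties = {
    "MTR-1042": (date(2007, 3, 15), 5),   # new premium-efficient motor
    "MTR-0877": (date(2008, 1, 10), 1),   # repaired motor, one-year shop warranty
}

def under_warranty(asset_id: str, failure_date: date) -> bool:
    """True if the asset failed before its warranty expired."""
    in_service, years = warranties[asset_id]
    expiry = in_service.replace(year=in_service.year + years)
    return failure_date <= expiry

# Flag the failure for a warranty claim instead of a routine repair order.
if under_warranty("MTR-1042", date(2008, 9, 1)):
    print("Failure is within warranty -- initiate a recovery claim.")
```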

Planning and scheduling
R&M personnel are resources—just like any other corporate resource. It is crucial to optimize the utilization of these resources while still understanding that full utilization (wrench time or local efficiencies) is neither required nor reasonably possible. The concept of trying to control the efficiencies through such practices as scheduling more work than is possible and letting the workforce sort it out amounts to lazy management—but it’s one of the latest fads in industry. In reality, tools that allow proper maintenance process development, optimized workforce, full scheduling with a high rate of completion, ease of training and even the ability to plan for reactive maintenance are readily available.

Currently, the maintenance budget makes up an average of 40% of most operating budgets, and average "wrench time" tends to be in the 20-40% range. This means that, per maintenance employee, an organization may be getting an average of only one to four hours of useful work per eight-hour shift. This is not a maintenance issue—it is a management issue, and it cannot be dealt with by simply laying off personnel.

Planning and scheduling tasks tend to be based upon fixed times in both the internal and contracted maintenance arena. This can lead to inefficient or ineffective use of resources and the decline of the maintenance department toward reactive maintenance, further reducing the efficiency of the program. There are a number of ways to not only ensure proper completion of maintenance tasks—both scheduled and reactive—but also to improve wrench time.

In the production and operations arena, there are a number of ways to schedule production for maximum efficiency. The method for getting the most out of the process is first to determine if the production method is a job shop, batch, assembly line or continuous flow. Once Operations has determined the type of process, scheduling can be performed using simple methods, with unknowns including suppliers and uptime. In fact, some planning methods review production and take into account reduced throughput due to improper maintenance without realizing it. Maintenance differs in that it can be a combination of several systems, as in the following, for example:

  1. Reactive Maintenance (RM): This is a job shop process wherein each repair and return to service is handled on a case-by-case basis.
  2. Preventive Maintenance (PM): Depending on the type of PM, this can be job shop, batch or assembly; and
  3. Predictive/Condition-Based Maintenance (PdM/CBM): These are generally batch or assembly with continuous monitoring falling under continuous flow.

Add in the variables of individual training, experience and an aging workforce, and it's easy to see why planning and scheduling can become quite complex. Things become even more complicated in situations where Production and Operations departments fail to turn over equipment for maintenance. As a result, many planning and scheduling philosophies take the easy way out by promoting the overscheduling of work. This type of approach leads to frustration on the part of the workforce, which is never able to catch up on its daily workload. In turn, there tends to be a falling back on performing activities in exactly the amount of time outlined by the task, growing lethargy or even unnecessary overtime to meet PM task completion. In effect, each technician becomes a bottleneck in the maintenance system.

While a complete exercise in planning and scheduling is beyond the scope of this particular article, it is important to note that an effective system for doing so has been available for many decades. Many maintenance personnel with military experience have already used this type of process. At the technician level, the maintenance person receives a card that has detailed, step-by-step directions, a list of all materials required, the qualifications required to perform the task, the amount of time to perform the task and the frequency of performance. One of the first things former military maintenance people notice when they enter the civilian maintenance community is the lack of detail in the programs. The result is multiple methods of performing the same task by different personnel. This costs a company an additional opportunity—when performing an RCA on a PM, or attempting to perform an MER, the lack of process often masks the root cause, or may even be the root cause.

Considerations in strategic development of an R&M program
One of the downfalls of most maintenance programs is the lack of a clearly defined corporate strategy. Most often, the “strategy” is presented in terms of “maintenance cost reduction,” which, unfortunately, doesn’t relate to any particular study or measure. For instance, a demand to reduce maintenance costs by 50% doesn’t make any sense if there has been no work to determine what level of maintenance is actually required to keep equipment at the needed level of capacity—or effectiveness.

The actual development of an R&M strategy requires the company to take a long, hard look at the condition of its assets. This look must extend from the capabilities of personnel and vendors, to parts and materials, to safety and regulatory requirements, to measures, to implementation. A company must have a clear-cut vision of what type of availability it is willing to invest in and the gap between present levels and the goal. Something else to consider is the effect of changes to maintenance practices—while some are immediate, most are seen over a long period of time. Any R&M strategy must be planned for the long term, with the negative effects of poor practices being cumulative over time.

Another common error is development of a strategy without input from those affected. A team attempting to develop an effective strategy should include the following individuals or representation of departments:

  • Senior management—not an appointee
  • R&M management
  • R&M technicians
  • Utility or energy management
  • Purchasing
  • Information technology
  • Operations management
  • MRO
  • Others, as necessary

Vendors should not be included in the development of the strategy. The strategic process should result in a corporate goal and vision that must be communicated—in detail. The next step is to require that managers and frontline supervisors develop tactics to implement and meet the strategy of the company.

Final thoughts
Through the recent era of cost-based management, our physical assets have degraded. That’s because the general goal has been to obtain the maximum from an asset without investing in that asset. Our maintenance organizations have been placed in a reactive mode that is not wholly the fault of corporate management. Fault also lies in the complacency of our workforce with only a few “rate-busters” really setting the pace.

The potential impact on the economy from better R&M programs throughout industry is staggering. The local impact, though, is no less dramatic. It includes:

  • Reductions in overall maintenance costs averaging 24-30%
  • Elimination of 70-75% of unplanned breakdowns
  • Uptime improvements of 35-40%
  • Increases in throughput (capacity improvements) of 20-25%
  • Elimination of 33-66% of PM tasks
  • Man-hour improvements of 45-50%
  • Decreases in energy consumption and related greenhouse gas emissions of more than 10%

All of these benefits can come from the simple task of doing the right maintenance at the right time for the right reasons on the right equipment. MT


Howard W. Penrose, Ph.D., CMRP, president of SUCCESS by DESIGN® Reliability Services, has spent more than 25 years working in the R&M industry, from the shop floor, to academia and the military, to manufacturing. A three-time recipient of General Motors’ “People Make Quality Happen” award, he and his organization specialize in all aspects of reliability and maintenance, from facility to production to product. Among his many achievements, Penrose has authored 13 books, including, most recently, Physical Asset Management For The Executive: Caution Do Not Read This On An Airplane (on which this magazine article is based), and Electrical Motor Diagnostics: 2nd Edition. Telephone: (860) 577-8537; e-mail: howard@motordoc.com
