
27 May

Safety


Applying Predictive Analytics in Safety

In recent years, companies have been generating vast and ever-increasing amounts of data associated with business operations. This trend has led to renewed interest in predictive analytics, a field focused on analyzing large data sets to identify patterns and predict outcomes that can guide decision-making. While many leading companies use predictive analytics to identify marketing and sales opportunities, similar data analysis strategies remain far less common in occupational and process safety, even though the potential benefits of analyzing safety data are considerable.

Just as companies are currently using customer data to predict customer behavior, safety and incident data can be used to predict when and where incidents are likely to occur. Appropriate data analysis strategies can also identify the key factors that contribute to incident risk, thereby allowing companies to proactively address those factors to avoid future incidents.

Predictive Analytics: In Theory

Let’s take a step back and look at what predictive analytics is and what it does. Predictive analytics is a broad field encompassing aspects of various disciplines, including machine learning, artificial intelligence, statistics, and data mining. Predictive analytics uncovers patterns and trends in large data sets for the purpose of predicting outcomes before they occur. One branch of predictive analytics, classification algorithms, could be particularly beneficial to industry, especially when it comes to avoiding incidents.

Classification algorithms can be categorized as supervised machine learning. With supervised learning, the user has a set of data that includes predictive variable measurements that can be tied to known outcomes. The algorithms identify the relationships between various factors and those outcomes to create predictive rules (i.e., a model). Once created, the model can be given a dataset with predictive variable measurements and unknown outcomes, and will then predict the outcome based on the model rules.
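
To make that workflow concrete, here is a minimal sketch in Python (using scikit-learn and entirely synthetic, hypothetical feature names): a model is fit on historical records with known outcomes and then applied to new records whose outcomes are unknown.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for historical records with known outcomes (hypothetical features)
rng = np.random.default_rng(0)
features = ["rail_age_years", "annual_tonnage", "detected_defects"]
history = pd.DataFrame(rng.normal(size=(800, 3)), columns=features)
history["incident"] = (rng.random(800) <
                       1 / (1 + np.exp(-history["detected_defects"]))).astype(int)

# Learn the relationship between the predictive variables and the known outcomes
model = LogisticRegression(max_iter=1000).fit(history[features], history["incident"])

# Apply the learned rules to new records whose outcomes are unknown
new_records = pd.DataFrame(rng.normal(size=(5, 3)), columns=features)
new_records["incident_probability"] = model.predict_proba(new_records[features])[:, 1]
print(new_records.sort_values("incident_probability", ascending=False))
```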

Predictive Analytics: In Practice

This approach can be illustrated with a recent Kestrel project for a railroad client. Like many in the transportation industry, the railroad had experienced a number of derailments caused by broken rails. Broken rail derailments can have particularly severe consequences, since they typically occur on mainline tracks, at full speed, and with no warning of the impending broken rail. Kestrel was asked to create a predictive model of track-caused derailments on a mile-by-mile basis to identify areas of high broken rail risk so the railroad could target those areas for maintenance, increased inspections, and capital improvement projects.

Penalized Likelihood Logistic Regression

As described above, classification models learn predictive rules from an original data set that includes known outcomes, then apply the learned rules to a new data set to predict outcomes and probabilities. In this case study, Kestrel used a logistic regression modified by Firth’s penalized likelihood method (sketched in code after the list below) to:

  • Fit the model
  • Identify eleven significant predictive variables (based largely on past incidents)
  • Calculate broken rail probabilities for each mile of mainline track based on track characteristics
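
Firth’s correction is not built into the common Python statistics libraries, but the core iteration is compact. The following sketch is purely illustrative and is not Kestrel’s production model: it implements the Firth-modified score equations with NumPy on synthetic data and then converts the fitted coefficients into per-mile probabilities.

```python
import numpy as np

def firth_logistic(X, y, max_iter=50, tol=1e-8):
    """Illustrative Firth-penalized logistic regression (modified-score iteration).

    X : (n, p) NumPy array with a leading column of ones for the intercept.
    y : (n,) array of binary outcomes, e.g. 1 = broken rail on that mile, 0 = none.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        pi = 1.0 / (1.0 + np.exp(-(X @ beta)))        # predicted probabilities
        W = pi * (1.0 - pi)                           # IRLS weights
        info = (X.T * W) @ X                          # Fisher information X'WX
        info_inv = np.linalg.inv(info)
        # leverages h_i of the weighted hat matrix
        h = W * np.einsum("ij,jk,ik->i", X, info_inv, X)
        # Firth-modified score: X'(y - pi + h * (0.5 - pi))
        step = info_inv @ (X.T @ (y - pi + h * (0.5 - pi)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Tiny synthetic demonstration (hypothetical data): intercept plus two track features
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
y = (rng.random(500) < 1 / (1 + np.exp(-(X @ np.array([-3.0, 1.0, 0.5]))))).astype(float)

beta = firth_logistic(X, y)
broken_rail_probability = 1.0 / (1.0 + np.exp(-(X @ beta)))   # per-mile probabilities
print(beta)
```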

Final Model

The final model calculates a predicted probability of a broken rail occurring on each mile of track over a two-year period. The results suggest that the final model effectively predicted broken rail risk, with 33% of broken rails occurring on the riskiest 5% of track miles and 70% occurring in the riskiest 20%. Further, the model shows that the greatest risk reduction for the investment may be obtained by focusing on the 2.5% of track miles with the highest probability of a broken rail. This ability to predict where broken rails are likely to occur will allow the company to more effectively manage broken rail derailment risk through targeted track inspections, maintenance, and capital improvement programs.
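
Capture statistics like those quoted above can be computed directly from the model’s predicted probabilities. A short sketch, using synthetic numbers in place of the actual railroad data:

```python
import numpy as np

def capture_rate(prob, actual, top_fraction):
    """Share of actual events that occur in the riskiest top_fraction of units."""
    order = np.argsort(prob)[::-1]                 # riskiest miles first
    n_top = int(np.ceil(top_fraction * len(prob)))
    return actual[order][:n_top].sum() / actual.sum()

# Synthetic check: 1,000 track miles, with broken rails concentrated on high-risk miles
rng = np.random.default_rng(0)
prob = rng.random(1000)                            # predicted probability per mile
actual = (rng.random(1000) < prob * 0.04).astype(int)  # 1 = broken rail occurred

print(f"Top 5% of miles capture {capture_rate(prob, actual, 0.05):.0%} of broken rails")
print(f"Top 20% of miles capture {capture_rate(prob, actual, 0.20):.0%} of broken rails")
```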

Implications for Other Industries

The same general approach described in the above case study can also be applied to other industries, using key performance indicators (KPIs) as the predictive variables and incidents as the outcome. The process is as follows:

  • Measurements for defined variables would be taken regularly at each facility or unit. Precision increases as the measurements become more frequent and the observed area (facility/unit) becomes smaller.
  • Once a sufficient number of measurements has been taken, they would then be combined with incident data to provide both the predictive variable measurements and the outcome data needed for training a model (see the sketch following this list). This dataset would be fed into a logistic regression or another classification algorithm to create a model.
  • Once the model has been created, it can be applied to new measurements to predict the probability of an incident occurring at that location during the applicable timeframe.
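
For illustration only, the sketch below shows how such a training set might be assembled; the unit identifiers, periods, and KPI columns are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical KPI measurements per facility/unit and period
measurements = pd.DataFrame({
    "unit_id": ["U1", "U1", "U2", "U2"],
    "period":  ["2016-Q1", "2016-Q2", "2016-Q1", "2016-Q2"],
    "overdue_inspections": [3, 1, 7, 6],
    "near_miss_rate": [0.2, 0.1, 0.9, 0.7],
})

# Hypothetical incident log for the same units and periods
incidents = pd.DataFrame({
    "unit_id": ["U2"],
    "period":  ["2016-Q2"],
    "incident_count": [1],
})

# Combine measurements with outcomes: 1 if any incident occurred in that unit-period
training = measurements.merge(incidents, on=["unit_id", "period"], how="left")
training["incident"] = (training["incident_count"].fillna(0) > 0).astype(int)

# The KPI columns plus the "incident" label can now be fed to a logistic
# regression or other classification algorithm, as described above.
print(training)
```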

Once predicted incident probabilities have been found, management would be able to focus improvement resources on those locations that have the highest probabilities of experiencing an incident. The classification algorithms also identify which factors have predictive validity, so management will know how improving those factors will affect the predicted probability of incidents occurring. In other words, they will know which factors have the strongest relationship with incidents and can focus on improving those first.
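
One common way to see which factors carry the most weight in a logistic model is to express the fitted coefficients as odds ratios. A brief sketch on synthetic data with hypothetical KPI names:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a fitted incident model (hypothetical KPI features)
rng = np.random.default_rng(1)
features = ["overdue_inspections", "near_miss_rate", "overtime_hours"]
X = pd.DataFrame(rng.normal(size=(500, 3)), columns=features)
y = (rng.random(500) < 1 / (1 + np.exp(-(1.2 * X["near_miss_rate"] - 0.3)))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Odds ratio per feature: how a one-unit increase changes the odds of an incident.
# Larger values flag the factors with the strongest relationship to incidents.
odds_ratios = pd.Series(np.exp(model.coef_[0]), index=features)
print(odds_ratios.sort_values(ascending=False))
```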

Data-Driven Decisions

Industrial companies are generating and recording unprecedented amounts of data associated with operations. Those that strive to be best-in-class need to use that data intelligently to guide future business decision-making.

The versatility of predictive analytics, including the method described in this case study, can be applied to help companies analyze a wide variety of problems. In this way, companies can:

  • Explore and investigate past performance
  • Gain the insights needed to turn vast amounts of data into relevant and actionable information
  • Create statistically valid models to facilitate data-driven decisions

22 May

Safety


Q&A: The New ISO 45001 Standard

What is ISO 45001?

ISO 45001 is a new international standard created by the International Organization for Standardization (ISO) that specifies requirements for an occupational health and safety management system (OHSMS). It provides a framework for managing the prevention of work-related injury, illness, and death. The ultimate goal of the standard is to help organizations proactively improve OHS performance and create a safe and healthy workplace.

Note that ISO 45001 provides guidance. It does not state specific criteria for OHS performance, nor is it prescriptive about the OHSMS design. It is a management tool for voluntary use by organizations to minimize OHS risks.

Why is ISO 45001 necessary?

There are several reasons why the creation of an international standard to manage OHS performance is necessary:

  • First and foremost, organizations are responsible for minimizing the risk of harm to all individuals that may be impacted by their activities. The standard aims to protect human lives by encouraging organizations to create a safer, healthier workplace.
  • According to the International Labour Organization (ILO), there were 2.34 million deaths worldwide in 2013 as a result of work activities. The vast majority (2 million) were associated with work-related health issues, as opposed to injuries. The economic burden associated with this number of occupational injuries and illnesses is significant. Organizations must manage all their risks—including OHS—to survive. Poor OHS management can result in loss of key employees, business interruption, claims, higher insurance premiums, regulatory action, reputational damage, loss of investors, and loss of business.
  • Finally, increased globalization creates new OHS challenges. ISO 45001 is an international standard that promotes global conformity.

What are the key aspects of ISO 45001?

Many of the elements of ISO 45001 are the same as or similar to those found in OHSAS 18001. However, there are additions and changes in ISO 45001 that differentiate the new standard.

ISO 45001 establishes new roles for the organization’s people. First, it emphasizes worker participation in the OHSMS. This includes ensuring that workers are competent and have the appropriate skills to safely perform their tasks. Second, the role of top management is different than in OHSAS 18001. Of note, a designated Management Representative is no longer required; however, those individuals in management roles are expected to take ownership and demonstrate a commitment to OHS through leadership. Top management must demonstrate direct involvement and engagement with the OHSMS by:

  • Ensuring the organization’s OHS policy and objectives are compatible with the overall strategic direction of the organization
  • Integrating OHSMS processes and requirements into business processes
  • Developing and promoting an OHS culture that supports the OHSMS
  • Being accountable for the OHSMS’s effectiveness

In addition to people, ISO 45001 follows a risk-based approach that advocates prevention. This requires identifying activities that could harm those working on behalf of the organization. A large part of this involves understanding the “context” of the organization, another new element of ISO 45001. Organizations must be able to identify all external and internal factors that have the potential to impact OHS management objectives and results.

To address risks and opportunities, there are new clauses related to hazard identification, as well. As with other sections of the standard, hazard identification becomes a process rather than a procedure and, importantly, considers all individuals near the workplace who may be impacted by the organization’s activities. ISO 45001 further outlines a more defined hierarchy for organizations to determine appropriate controls.

How does ISO 45001 fit in with other ISO standards and management system approaches?

ISO 45001 follows the same high-level management system approach being applied to other ISO management system standards (e.g., ISO 14001 and ISO 9001)—Annex SL. Because of this, the ISO 45001 requirements should be consistent with the other standards to allow for relatively easy alignment and integration into the organization’s overall management processes.

In addition, ISO 45001 takes into account other OHS standards, including OHSAS 18001, ILO-OSH Guidelines, various national standards, and the ILO’s international labor standards and conventions.

What is Annex SL?

As mentioned above, Annex SL is the structure for all new and revised ISO standards. It defines the framework for a generic management system, which is then customized for each discipline. This standard structure allows for easier integration between management systems and improved efficiencies. The major clauses for all ISO management system standards are identical under Annex SL and follow the Plan-Do-Check-Act (PDCA) cycle. Organizations that have already implemented ISO 9001:2015 or ISO 14001:2015 will be familiar with the Annex SL structure.

The table below outlines the main clauses in Annex SL, as well as the OHSMS-specific clauses. Highlighted areas indicate those sections that are significant changes or additions relative to the existing OHSAS 18001 standard.

ISO 45001 Table

What does this mean for OHSAS 18001?

As outlined in the table above, ISO 45001 does not conflict with OHSAS 18001. In fact, it expands and enhances the existing standard to improve integration of the OHSMS into the overall business. ISO 45001 is intended to replace OHSAS 18001. Much like other management system standards, current users of OHSAS 18001 will need to update their systems according to the requirements of the new standard within a three-year transition period.

Who should use ISO 45001?

The short answer is everyone. ISO 45001 is designed to be a flexible management system that can be implemented by any organization, no matter the size, type, or industry. As long as the organization has people who may be affected by its activities, an OHSMS has value in ensuring worker health and safety and fulfilling legal requirements.

Why should I do this? Why are management systems like ISO 45001 beneficial?

A management system is an organizing framework that enables companies to achieve and sustain their operational and business objectives through a process of continuous improvement. A management system is designed to identify and manage risks through an organized set of policies, procedures, practices, and resources that guide the enterprise and its activities to maximize business value.

What do I do next?

  • Get informed! Start reading up on ISO 45001 to get familiar with how the new standard is structured.
  • Identify gaps in your existing OHSMS that will need to be addressed to meet any new requirements. If you don’t have an existing OHSMS, review the requirements and determine what pieces you may already have in place.
  • Develop an implementation plan. There is a three-year transition period. Plan according to this timeline.
  • Provide training. It is vital to ensure that workers and management are engaged in the OHSMS and that they are competent in any new skills/responsibilities that may be required.
  • Put your plan into action. Update/develop your OHSMS to meet the ISO 45001 requirements and provide verification of its effectiveness to ensure certification.

19 May
Technology Tip: Software and Audits Top 10

All types of business and operational processes demand a variety of audits and inspections to evaluate compliance with standards—ranging from government regulations to industry codes, to system standards (e.g., ISO), to internal corporate requirements.

Audits provide an essential tool for improving and verifying compliance performance. Audits may be used to capture regulatory compliance status, management system conformance, adequacy of internal controls, potential risks, and best practices.

By combining effective auditing program design, standardized procedures, trained/knowledgeable auditors, and computerized systems and tools, companies are better able to capture and analyze audit data, and then use that information to improve business performance. Having auditing software of some sort can greatly streamline productivity and enhance quality, especially in industries with many compliance obligations.

The following tips can help ensure that companies are getting the most out of their auditing process:

  1. Have a computerized system. Any system is better than nothing; functional is more important than perfect. The key is to commit to a choice and move forward with it. Companies are beginning to recognize the pitfalls of “smart people” audits (i.e., an audit conducted by an expert + notebook with no protocols or systems). While expertise is valuable, this approach makes it difficult to compare facilities and results, is not replicable, and provides no assurance that everything has been reviewed. A defined system and protocol helps to avoid these pitfalls.
  2. Invest time before the audit. The most important time in the audit process is before the audit begins. Do not wait until the day before to prepare. There is value in knowing the scope of the audit, understanding expectations, and developing question sets/protocol. This is also the time to ensure that the system collects the data desired to produce the final report.
  3. Capture data. Data is tangible. You can count, sort, compare, and organize data so it can be used on the back end. Data allows the company to produce reports, analytics, and standard metrics/key performance indicators (see the sketch following this list).
  4. Don’t forget about information. Information is important, too. It provides the descriptions, directions, photos, etc. that support the data and paint a complete picture.
  5. Be timely. Reports must be timely to correct findings and demonstrate a sense of urgency. Reports serve as a permanent record and begin the process of remediation. The sooner they are produced, the sooner corrective actions begin.
  6. Note immediate fixes. During the audit, there may be small things uncovered that can be fixed immediately. These items need to be recorded even if they are fixed during the audit. Unrecorded items “never happened”. Correspondingly, it is important to build a culture where individuals are not punished for findings, as this can result in underreporting.
  7. Understand the audience. Who will be reading the final report? What do they need to know? What is their level of understanding? Not all data presentation is useful. In fact, poorly presented data can be confusing and cause inaction. It is important to identify key data, reports desired, and the ways in which outputs can be automated to generate meaningful information.
  8. Compare to previous audits. The only way to get an accurate comparison is if audits have a common scope and a common checklist/protocol. Using a computerized system can ensure that these factors remain consistent. Comparisons reinforce and support a company’s efforts to maintain and improve compliance over time.
  9. Manage regulatory updates. It is important to maintain a connection to past audits and the associated compliance requirements at the time of the audit. Regulations might change, and that needs to be tracked. Checklists, however, may remain the same. Companies should have a process for tracking regulatory updates and making sure that the system is updated appropriately.
  10. Maintain data frequency. For data, frequency is key. Consider what smaller scope, higher frequency audits look like. These can allow the company to gather more data, involve more people, and improve the overall quality and reliability of reports.
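
As a small illustration of point 3, the sketch below (with made-up findings data) turns captured audit records into two simple metrics: findings per facility and a corrective-action closure rate.

```python
import pandas as pd

# Hypothetical export from an audit system: one row per finding
findings = pd.DataFrame({
    "facility": ["Plant A", "Plant A", "Plant B", "Plant B", "Plant B"],
    "finding":  ["labeling", "guarding", "permits", "training", "records"],
    "status":   ["closed", "open", "closed", "closed", "open"],
})

summary = findings.groupby("facility").agg(
    total_findings=("status", "size"),
    open_findings=("status", lambda s: (s != "closed").sum()),
)
summary["closure_rate"] = 1 - summary["open_findings"] / summary["total_findings"]
print(summary.sort_values("closure_rate"))
```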

A well-designed and well-executed auditing program—with analysis of audit data—provides an essential tool for improving and verifying business performance. Audits capture regulatory compliance status, management system conformance, adequacy of internal controls, potential risks, and best practices. And using a technology tool or system to manage the audit makes that information even more useful.

09 May

Safety


Final Rule: Walking-Working Surfaces

OSHA has issued a final rule updating its general industry Walking-Working Surfaces standard to protect workers from slip, trip, and fall hazards. The rule also increases consistency in safety and health standards for people working in both general and construction industries.

The final rule’s most significant update is allowing employers to select the fall protection system that works best for them, choosing from a range of accepted options including personal fall protection systems.

OSHA estimates the final rule will prevent more than 5,800 injuries a year. The rule takes effect Jan. 17, 2017.

Read the full press release.

06 May
Case Study: Efficient Compliance Management

Regulatory enforcement, customer and supply chain audits, and internal risk management initiatives are all driving requirements for managing regulatory obligations. Many companies—especially those that are not large enough for a dedicated team of full-time EHS&S staff—struggle with how to effectively resource their regulatory compliance needs.

The following case study talks about how The C.I. Thornburg Co., Inc. (C.I. Thornburg) is using a technology tool to efficiently meet National Association of Chemical Distributors (NACD) and a number of other regulatory requirements.

The Challenge of Compliance

C.I. Thornburg joined NACD in January 2015. As a condition of membership, the company started the process of developing and implementing Responsible Distribution in April 2015. Responsible Distribution showcases member companies’ commitment to continuous improvement in every business process of chemical distribution—and it requires rigorous management activities to develop and maintain.

With an EHS&S department of one, managing all of those activities was a challenge for C.I. Thornburg. The company was looking for a way to streamline the process and more effectively manage Responsible Distribution requirements and regulatory compliance obligations.

Code & Compliance Elite™

C.I. Thornburg brought on Kestrel to initially help the company achieve Responsible Distribution verification. Kestrel worked with C.I. Thornburg to customize and implement Code & Compliance Elite (CCE™), an easy-to-use technology tool designed to effectively manage management system and verification requirements. Kestrel tailored the CCE™ application specifically for C.I. Thornburg to provide:

  • Document management – storage, access, and version control
  • Mobile device access
  • Regulatory compliance management and compliance obligation calendaring
  • Internal audit capabilities
  • Corrective and preventive action (CAPA/CPAR) tracking and management
  • Task and action management

CCE™ played a large role toward the end of C.I. Thornburg’s Responsible Distribution implementation, particularly with document control and organization, and in the verification audit. During verification, documents could be quickly referenced because of how they are organized in CCE™, making the process very efficient. According to C.I. Thornburg Director of Regulatory Compliance and EHS&S Richard Parks, “The verifier was blown away by how well we were organized and how the tool linked many documents from different regulatory policies.” The company achieved verification in May 2016.

Broadening to Other Regulatory Requirements

CCE™ is still being used to manage Responsible Distribution requirements, but C.I. Thornburg is now working with Kestrel to expand it to all regulatory branches that govern the business. Regulatory requirements function similarly—for example, Responsible Distribution has 13 codes, Department of Homeland Security (DHS) has 18 performance standards (RBPS), and OSHA PSM has 14 elements. All require internal audits and corrective action tracking—things that can be easily and effectively managed through CCE™ to create a one-stop shop for regulatory compliance. Kestrel is currently developing the DHS and PSM modules in CCE™ for C.I. Thornburg.

Valuable Management Tool

CCE™ is providing C.I. Thornburg with a valuable management tool that helps automate management of the company’s regulatory obligations. According to Parks, as a small organization that depends on efficient tools rather than added manpower to manage compliance, C.I. Thornburg has realized huge cost savings and tremendous value from CCE™, including the following:

  • CCE™ has become the ultimate efficiency tool. Tasks that used to take hours to complete are now easily done in just minutes.
  • The internal audit function of CCE™ makes audits seamless and tracking and follow-up easy.
  • The CAPA tool ensures that the company is managing corrective actions and completing follow-up activities and tasks.
  • The functionality of CCE™ allows for managing multiple regulatory dashboards, providing a one-stop shop for managing regulatory compliance obligations.
  • CCE™ creates an organized document structure that enables easy access to information and quick response to auditors.
  • During Senior Management Review, senior managers see the benefit of being able to reference the history of corrective actions and audits through CCE™.

“A lot of NACD member companies are small organizations that have limited resources to effectively manage all EHS&S needs,” said Parks. “CCE™ really creates the department and is a huge value to small businesses. With the CCE™ technology and a company’s clearly defined goal, Kestrel can provide an efficient solution to most any need.”

05 May

Safety


Applying Predictive Analytics to Leading Indicators

Leading indicators can be defined as safety-related variables that proactively measure organizational characteristics with the intention of predicting and, subsequently, avoiding process safety incidents. Leading indicators become especially powerful when combined with advanced statistical methods, including predictive analytics.

Case Study

Kestrel developed a major incident predictive analytical model for the transportation industry that is also applicable to the process industries. Using regularly updated inspection data, the model was created to provide major incident probabilities for each transportation segment over a six-month period.

Additionally, the model identifies the variables that are significantly contributing to major incidents, thereby showing the company which factors to address to prevent future incidents. Model validation revealed that it could successfully predict the location and time frame of 75% of major incidents.
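
Validation of a claim like this is typically performed on data held out from model fitting. The sketch below outlines the idea, fitting on earlier periods and checking how many later major incidents fall on the segments flagged as riskiest; the table and column names are hypothetical, not Kestrel’s actual data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def validate_capture(data: pd.DataFrame, features: list, cutoff: str,
                     flag_fraction: float = 0.10) -> float:
    """Fit on periods before `cutoff`, then return the share of later major
    incidents that occurred on the segments flagged as riskiest."""
    train = data[data["period"] < cutoff]
    test = data[data["period"] >= cutoff].copy()

    model = LogisticRegression(max_iter=1000)
    model.fit(train[features], train["major_incident"])
    test["risk"] = model.predict_proba(test[features])[:, 1]

    flagged = test.nlargest(int(flag_fraction * len(test)), "risk")
    return flagged["major_incident"].sum() / test["major_incident"].sum()

# Example call on a hypothetical per-segment inspection table:
# share = validate_capture(segment_data, ["defect_density", "traffic_volume"], "2016-01")
# print(f"{share:.0%} of later major incidents occurred on flagged segments")
```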

Broader Applicability

Companies in the process industries are generating and recording unprecedented amounts of data associated with operations. Companies that strive to be best-in-class need to use that data intelligently to guide future business decision-making.

The versatility of predictive analytics, including the method described in this case study, can be applied to help companies analyze a wide variety of problems. In this way, companies can:

  • Explore and investigate past performance
  • Gain the insights needed to turn vast amounts of data into relevant and actionable information
  • Create statistically valid models to facilitate data-driven decisions

Join Kestrel at the 2016 International Symposium

Kestrel’s William Brokaw will be presenting the case study discussed above on Tuesday, October 25 at 1:15 p.m. at the Mary Kay O’Connor Process Safety Center 2016 International Symposium: Applying Predictive Analytics to Process Safety Leading Indicators.

MKOPSC 2016 International Symposium
October 25-27, 2016
Hilton Conference Center
College Station, Texas

Kestrel’s experts will also be on hand throughout the Symposium to talk with you. Stop by and see us at our booth. We welcome the opportunity to learn more about your needs and to discuss how we help our chemical and oil & gas clients manage environmental, safety, and quality risks; improve safety performance; and achieve regulatory compliance assurance.

04 May
Frank R. Lautenberg Chemical Safety Act

Last year, we came to you with breaking news about Toxic Substances Control Act (TSCA) reform taking hold, as the U.S. House of Representatives passed the TSCA Modernization Act of 2015 (H.R. 2576) on June 23, 2015.

Almost one year later—and approximately 40 years since the Act’s inception—President Obama signed the Frank R. Lautenberg Chemical Safety for the 21st Century Act (FRL-21) into law on June 22, 2016, amending the nation’s primary chemical management law. A historic bipartisan achievement, this Act gives the USEPA immediate authority to begin evaluating the risk of any chemical it designates as “high priority”.

Background

TSCA was developed to ensure that products are safe for intended use by providing the USEPA authority to review and regulate chemicals in commerce. Despite its intention, TSCA has proven to be rather ineffective in providing adequate protection and in facilitating U.S. chemical manufacturing and use. More than 80,000 chemicals available in the U.S. have never been fully tested for their toxic effects on health and the environment. In fact, under TSCA, the USEPA has only banned five chemicals since 1976.

According to a blog by USEPA Administrator Gina McCarthy, “While the intent of the original TSCA law was spot-on, it fell far short of giving EPA the authority we needed to get the job done.”

And that is where FRL-21 takes over, strengthening the foundation built by TSCA to ensure that chemical safety remains paramount.

Key Changes

FRL-21 remains consistent with the 2009 Principles for TSCA Reform. The USEPA outlines the following key regulatory changes in its Q&A briefing on the Act.

Evaluates the safety of existing chemicals in commerce, starting with those most likely to cause risks. This is the first time that all chemicals in commerce will undergo risk-based review by the USEPA. The Agency is charged with creating a risk-based process to determine which chemicals should be prioritized for assessment. High-priority chemicals may present an unreasonable risk to health or the environment due to potential hazard and route of exposure. A high-priority designation, in turn, triggers a risk evaluation to determine the chemical’s safety. This prioritization ensures that those chemicals that present the greatest risk will be reviewed first.

Evaluates new and existing chemicals against a new risk-based safety standard. Under the law, the USEPA will evaluate chemicals based purely on the health and environmental risks they pose. The evaluation must also include considerations for vulnerable populations (e.g., children, elderly, immune-compromised). FRL-21 further repeals the requirement that the Agency apply the least burdensome means of adequately protecting against unreasonable risk from chemicals. Costs and benefits will not be factored into the evaluation.

Empowers USEPA to require the development of chemical information necessary to support these evaluations. In short, the Agency has expanded authority to demand additional health and safety or testing information from manufacturers and/or to conduct risk evaluations on a chemical. USEPA may also expedite the process through new order and consent agreement authorities.

Establishes clear and enforceable deadlines that ensure timely review of prioritized chemicals and timely action on identified risks. Strict deadlines are designed to keep the USEPA’s work on track and to ensure compliance by manufacturers. For example, the Agency must have 10 ongoing risk evaluations within the first 180 days and 20 ongoing risk evaluations within 3.5 years. When unreasonable risks are identified, USEPA must then take final risk management action within two years. Action, which may include labeling, bans, and phase-outs, must begin no later than five years after the final regulation.

Increases public transparency of chemical information by limiting unwarranted claims of confidentiality. The USEPA must review and make determinations on all new confidentiality claims for chemical identity, as well as review past confidentiality claims to determine if they are still warranted. This will allow companies to preserve their intellectual property and competitive advantage, while still providing transparency to the public.

Provides a source of funding for the USEPA to carry out these changes. The USEPA can collect up to $25 million annually in user fees from chemical manufacturers and processors when they:

  • Submit test data for USEPA review
  • Submit a pre-manufacture notice for a new chemical
  • Manufacture or process a chemical that is the subject of a risk evaluation
  • Request that the USEPA conduct a chemical risk evaluation

Impacts

For companies, the most immediate impacts of FRL-21 will be on the new chemicals review process, as the USEPA must approve any new chemical or significant new use of an existing chemical before manufacturing can commence and chemicals can enter the marketplace. This process will help provide regulatory certainty throughout the supply chain—from raw material producers to retailers. And, in the end, the risk evaluations will help ensure that manufacturers are able to bring new chemicals to the market in a safe and efficient way.

As for the general public, FRL-21 creates a new standard of safety to protect the public and the environment from unreasonable risks associated with chemical exposure. For the first time in 40 years, it provides assurance and greater confidence that chemicals are being used safely.

30 Apr
Management System Internal Audit: What to Expect

Many companies face requirements to conduct management system internal audits. And many probably consider it to be one of those “necessary evils” of doing business. In reality, an internal audit can be a great opportunity to uncover issues and resolve them before an external audit begins. An internal audit can sometimes even enable more improvements than an external audit because it allows the company to review processes more often and more thoroughly. So what, exactly, goes into an internal audit?

What Is an Audit?

First, conducting a management system internal audit encompasses all of the efforts to gather, accumulate, arrange, and evaluate data so that there is sufficient information to arrive at an audit opinion. According to the ANSI/ASQC Standard Q1-1986 Generic Guidelines for Auditing Management Systems, an audit is:

a systematic examination of the acts and decisions by people with respect to Q/EHS issues, in order to independently verify or evaluate and report conformance to the operational requirements of the program or the specification or contract requirement of the product or service.

Internal audits should be carried out to look for areas for improvement and best practices. In an internal audit, the auditor is evaluating, verifying, and reporting conformance or non-conformance in terms of related documentation. The auditor assesses systems, processes, and products against the related documentation:

  • Systems are compared against company directives and requirements.
  • Processes are compared against procedures, process charts, and work instructions.
  • Products are compared against specifications and requirements.

The auditor examines where and how “operational requirements of the management system” are described. This is done by reviewing each policy, procedure, work instruction, checklist, and form looking for each “actionable item” listed within.

The Interview

The auditor will go out into the workforce and ask the prepared questions to various employees.  Based on the responses given, the auditor may need to ask follow-up questions to get a clear understanding of how an operation works. Questions asked by auditors are generally open-ended to give the auditee the opportunity to elaborate. The auditor’s goal is to give the employee the opportunity to think prior to answering and to follow the audit trail wherever it leads—within or outside of the department.

Tangible Evidence

In order for an internal audit to support improvement steps, the auditor will seek tangible evidence. For example, if work instructions require that inspections be completed every day but the checklist shows that no checks have been performed for the past week, tangible evidence may include a photocopy of the checklist to document the issue.

Evaluating Internal Controls

During the audit, the auditor is looking for internal controls that regulate an operation. There are seven steps in evaluating internal controls:

  1. Observe the Operation: The auditor needs to understand what processes and systems to review, where they are located, and who is responsible for them.
  2. Identify Constraints: The auditor will identify constraints to the extent possible, such as:
    • Scattered information
    • Internal opposition
    • Process not capable
    • Process not in control
    • Unavailable information
  3. Evaluate Risk: The auditor will assess the importance and risk of internal controls not detecting and preventing non-conformances. The auditor will ask personnel being audited and management if there is anything more that could be done to identify and control risk.
  4. Evaluate the Internal Control Structure: The usual assumption is that extensive internal controls exist, operate properly, and maintain/improve the process; however, this assumption may not be accurate. Controls may not exist, may be weak, or may control and measure unimportant variables. It is very important for the auditor to resist assuming that the way an existing system has been set up is the correct way to do something. Auditors should challenge how and why something is being done to encourage system improvements.
  5. Test the Effectiveness of the Internal Control Structure: Gathering evidence is the process of collecting data and information critical to support a decision or judgment rendered by the auditor.
  6. Evaluate Evidence: Once evidence has been gathered from interviews, observations, or records, the auditor must distill and summarize the data into useful information for the company. The evidence is then reviewed to determine whether systems and controls are working effectively.
  7. Issue an Opinion: When all is said and done, the auditor must issue an opinion of conformance or non-conformance. In a deficiency finding (non-conformance), the audit report will clearly state that there is a variance between what is and what should be. All evidence findings should be listed to support this conclusion.

Clarify Issues and Non-Conformances

Upon completion of an audit, there may be times when clarification of an issue or concern will be warranted.  This is when the auditor may go back to the department head and review the current understanding of the audit results. The department head should have ample time to discuss and clarify any issues of concern.

Any outstanding issues that warrant a non-conformance report should be discussed to ensure that the company understands: 1.) why the issue is considered a non-conformance, and 2.) what may need to be done to rectify the situation. It is important to also discuss all positive findings from the audit to leverage best practices.

By using an internal audit to actually improve operations—and not just as another requirement to fulfill—companies can realize significant value through:

  • Meeting regulatory/certification requirements prior to the external audit
  • Improving operational controls and processes
  • Enhancing overall management system effectiveness

23 Apr
Kestrel to Present at the AFPM Annual Meeting

Join Kestrel at the AFPM Annual Meeting to hear William Brokaw present his paper, Using a Data-Driven Method of Accident Analysis: A Case Study of the Human Performance Reliability (HPR) Process.

AFPM 2016 Annual Meeting
March 13-15, 2016
Kestrel Presentation: March 14 at 3:30 p.m.
Hilton San Francisco Union Square
San Francisco, CA

The Role of Human Error in Occupational Incidents

The concept of human error and its contribution to occupational accidents and incidents have received considerable research attention in recent years. When an accident/incident occurs, investigation and analysis of the human error that led to the incident often reveals vulnerabilities in an organization’s management system.

This recent emphasis on human error has resulted in an expansion of knowledge related to human error and the most common factors contributing to incidents. Kestrel’s Human Performance Reliability (HPR) process helps to classify human error—with the additional step of associating the control(s) that failed to prevent the incident from occurring. This process allows organizations to identify how and where to focus resources to drive safety performance improvements.

In this presentation, Will describes Kestrel’s method for identifying the most frequent human errors and most problematic controls and presents a case study wherein HPR was applied to a large petroleum refining company.

Catch Up with Kestrel

In addition to the presentation on March 14, Kestrel’s experts will also be on hand throughout the Annual Meeting to talk with you. We welcome the opportunity to learn more about your needs and to discuss how we help our chemical and oil & gas clients manage environmental, safety, and quality risks; improve safety performance; and achieve regulatory compliance assurance.

19 Apr

Safety


Managing Human Error to Improve Safety Culture

The concept of human error and its contribution to occupational accidents and incidents have received considerable research attention in recent years. As mechanical systems become safer and more reliable, human error is more frequently being identified as the root cause of or a contributing factor to an incident (Health and Safety Executive, 1999). In order to effectively manage human error, companies must understand not only human error but also the factors contributing to it.

Kestrel has found that a multi-pronged improvement plan can help companies reduce the risks associated with employee and contractor behavior and, as a result, improve the safety performance of the organization. The three primary components of this approach include the following:

  1. Incident investigation and analysis – adapted from the Human Factors Analysis and Classification System (HFACS)
  2. Human Reliability Analysis (HRA) – based on the Cognitive Reliability and Error Analysis Method (CREAM)
  3. Comprehensive safety culture assessment and improvement initiative

Incident Investigation

Incident investigation and analysis is based on the premise that employee and contractor performance is a significant source of risk within any organization. The majority of accidents and other unintended events are, at least in part, the result of human error. Companies manage risks associated with employee and contractor behavior through a variety of controls (i.e., policies, standards, procedures) that address employee selection, training, supervision, operating practices, corrective and preventive actions, etc. Accidents occur when there is a failure in one or more of these controls.

The Human Factors Analysis and Classification System (HFACS; Wiegmann & Shappell, 2003) is very helpful for identifying human errors that contribute to a single incident and for helping to guide the appropriate corrective action. However, it doesn’t help companies identify the controls (e.g., engineered, administrative, PPE) that are most often failing to prevent incidents. Additionally, it is not designed for the aggregation of multiple incident analyses for the purposes of analyzing trends, similarities, and the statistical significance of the results.

So while the HFACS framework can be used to identify and classify the human error(s) that contributed to the incident in question, the next steps are to 1.) identify and document the control(s) that failed to prevent each human error and 2.) describe the unique circumstances of the incident that were classified into that HFACS category. When aggregated across incidents, these analyses result in:

  • A list of the most frequently occurring human factors, which are ranked according to their statistical significance
  • Identification of the controls that are most frequently identified as failing to prevent the incidents in question
  • A list of the specific circumstances associated with each error identification, to look for commonalities when planning systemic, rather than local, corrective action

This provides the company with the ability to identify where to focus corrective resources and how to best deploy those resources.
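
As an illustration of the aggregation step (not Kestrel’s HPR tooling itself), the sketch below tallies HFACS categories and failed controls across investigations using made-up classifications, and ranks each category with a simple binomial test of whether it appears more often than chance.

```python
import pandas as pd
from scipy.stats import binomtest

# Hypothetical aggregation table: one row per human error identified across
# incident investigations, with its HFACS category and the control that failed.
errors = pd.DataFrame({
    "incident_id":    [101, 101, 102, 103, 104, 105, 105, 106],
    "hfacs_category": ["skill-based error", "decision error", "skill-based error",
                       "skill-based error", "perceptual error", "decision error",
                       "skill-based error", "skill-based error"],
    "failed_control": ["procedure", "training", "procedure", "supervision",
                       "procedure", "training", "procedure", "procedure"],
})

counts = errors["hfacs_category"].value_counts()
n_total, n_categories = int(counts.sum()), counts.size

# Rank categories by how unlikely their observed frequency is under a uniform spread
ranking = counts.to_frame("count")
ranking["p_value"] = [
    binomtest(int(c), n=n_total, p=1 / n_categories, alternative="greater").pvalue
    for c in counts
]
print(ranking.sort_values("p_value"))

# Controls most frequently identified as failing to prevent the errors
print(errors["failed_control"].value_counts())
```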

Human Reliability Analysis (HRA) and CREAM

There may be times when it is still difficult to create action plans to address the problematic controls, and a deeper analysis of a control is necessary in order to improve it. When this happens, Human Reliability Analysis (HRA) methods, specifically CREAM, help to further analyze the control.

HRA methods provide a detailed analysis of the potential for human error within a given process by observing the process step-by-step and evaluating the type(s) and the likelihood of error(s) that could occur at each step. The CREAM methodology, developed by Erik Hollnagel, focuses on the importance of cognition when attempting to identify, evaluate, and interpret potential human error.

Specifically, the CREAM method provides a framework for:

  1. Identifying the potential for human error in a process
  2. Describing the likelihood and nature of that error
  3. Evaluating if the potential for error requires action or if the existing risk is at an acceptable level

When the analysis is complete, it becomes possible to discuss viable options for deploying corrective action to improve the process (if necessary). These corrective actions can focus on the person, the operating environment, and/or the equipment involved in the process.

Safety Culture

Incident investigation and analysis and HRA function most effectively when a company exhibits a strong safety culture. Strong safety cultures share a number of common characteristics. Kestrel’s research into the topic of safety culture has identified two traits that are particularly important to an effective safety culture: leadership and employee engagement. Best-in-class safety cultures have robust systems in place to ensure that each of these traits, among others, is mature, well-functioning, and fully ingrained into the standard practices of the organization.

Assessing safety culture can be done by administering a safety culture survey, conducting interviews of key leadership and safety personnel, and leading focus groups with front-line employees and supervisors. The mix of quantitative data (survey) and qualitative information (interviews and focus groups) provides data that can then be statistically analyzed, as well as a rich context for the results of the statistical analysis.

Performing a safety culture survey also provides an “as-is” benchmark for comparing future survey results to determine if improvement efforts have been effective and have fully permeated into all levels and units across the organization.

Realizing the Richest Benefit

While the individual components discussed above can be very helpful to a company, deploying them in tandem provides the richest and most comprehensive benefit to company safety performance.

That is because the three components are inherently complementary. Each improves the effectiveness of the others. For example, safety culture improvements, specifically, improvements in mutual trust and respect between levels of the organization, lead to better incident investigation data. This is because employees feel free to provide honest and complete narratives of the incident since they know they will not be unfairly disciplined for what happened. As a result, incident investigation and analysis is better able to identify the human errors and, most importantly, the controls that are most often involved in incidents.

All of this then allows the company to identify the processes and procedures that may be appropriate candidates for HRA. Subsequently, corrective actions that result from both incident investigation/analysis and HRA demonstrate to employees that management is committed to continuous safety improvement, which further improves safety culture.
