Computer Science, asked by Mahesh2096, 1 year ago

Briefly discuss the commonly used methods in the validation process.

Answers

Answered by Jyotimodi

There are many statistical tools that can be used as part of validation. Control charts, capability studies, designed experiments, tolerance analysis, robust design methods, failure modes and effects analysis, sampling plans, and mistake proofing are but a few. Each of these tools will be summarized and their application in validation described.

1. INTRODUCTION

Validation requires documented evidence that a process consistently conforms to requirements. It requires that you first obtain a process that can consistently conform to requirements and then that you run studies demonstrating that this is the case. Statistical tools can aid in both tasks.

2. USES OF THE TOOLS

This section describes the many contributions that statistical tools can make to validation. Each tool appearing in bold is further described in Section 4.

One tool that is particularly useful in organizing the overall validation effort is a failure modes and effects analysis (FMEA) or the closely related fault tree analysis (FTA). An FMEA involves listing the potential problems or failure modes and evaluating their risk in terms of their severity, likelihood of occurrence, and ease of detection. Where potential risks exist, the FMEA can be used to document which failure modes have been addressed and which still need to be addressed. As each failure mode is addressed, the controls established are documented. The end result is a control plan. Addressing the individual failure modes will require the use of many different statistical tools.
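A common way to rank failure modes in an FMEA is a Risk Priority Number (RPN): the product of the severity, occurrence, and detection ratings, each typically scored 1-10. The following is a minimal sketch; the failure modes and ratings are hypothetical examples, not data from any real process.

```python
# Hypothetical FMEA entries: (failure mode, severity, occurrence, detection),
# each rating on a 1-10 scale (10 = worst).
failure_modes = [
    ("seal leak", 8, 3, 4),
    ("mislabeled vial", 9, 2, 7),
    ("underfilled container", 5, 4, 2),
]

def rpn(severity, occurrence, detection):
    """Risk Priority Number = severity x occurrence x detection."""
    return severity * occurrence * detection

# Rank failure modes so the highest-risk ones are addressed first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for mode, s, o, d in ranked:
    print(f"{mode}: RPN = {rpn(s, o, d)}")
```

Note that a mode with moderate severity but poor detectability ("mislabeled vial", RPN = 126) can outrank a more severe but easily detected one, which is exactly the kind of insight the ranking is meant to surface.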

Failures or nonconformities occur because of errors made and because of excessive variation. Obtaining a process that consistently conforms to requirements requires a balanced approach using both mistake proofing and variation reduction tools. When a nonconformance occurs because of an error, mistake proofing methods should be used. Mistake proofing attempts to make it impossible for the error to occur or at least to go undetected.

However, many nonconformities are not the result of errors; instead, they are the result of excessive variation and off-target processes. Reducing variation and properly targeting a process requires identifying the key input variables and establishing controls on these inputs to ensure that the outputs conform to requirements. Strategies and tools for reducing variation and optimizing the process average are described in Section 3.
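The idea that output variation is driven by variation in key inputs can be made concrete with a small simulation. This is an illustrative sketch only: the fill-weight model, nominal values, and standard deviations below are assumptions, not taken from the text.

```python
import random

random.seed(0)

def simulate(n=10000):
    """Simulate an output (fill weight) driven by two varying key inputs."""
    weights = []
    for _ in range(n):
        density = random.gauss(1.00, 0.01)   # key input 1: material density
        volume = random.gauss(50.0, 0.5)     # key input 2: dispensed volume
        weights.append(density * volume)     # output = density x volume
    return weights

weights = simulate()
mean = sum(weights) / len(weights)
var = sum((w - mean) ** 2 for w in weights) / (len(weights) - 1)

# First-order variance transmission: var(y) ~ vol^2*var(density) + density^2*var(vol)
approx = (50.0 * 0.01) ** 2 + (1.00 * 0.5) ** 2
print(f"simulated var = {var:.3f}, first-order approximation = {approx:.3f}")
```

The close agreement between the simulated and approximated variance shows why controlling the inputs with the largest transmitted contribution is the most effective route to reducing output variation.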

The end result of these efforts is a control plan. The final phase of validation requires demonstrating that this control plan works, i.e., that it results in a process that can consistently conform to requirements. One key tool here is a capability study. A capability study measures the ability of the process to consistently meet the specifications. It is appropriate for measurable characteristics where nonconformities are due to variation and off-target conditions. Testing should be performed not only at nominal, but also under worst-case conditions. When pass/fail data are involved, acceptance sampling plans can be used to demonstrate conformance to specifications. Finally, in the event of potential errors, challenge tests should be performed to demonstrate that mistake proofing methods designed to detect or prevent such errors are working.
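A capability study is usually summarized with the indices Cp and Cpk: Cp compares the specification width to six process standard deviations, while Cpk also penalizes an off-target mean. A minimal sketch, with hypothetical measurement data and specification limits:

```python
import statistics

def capability(data, lsl, usl):
    """Return (Cp, Cpk) for sample data against lower/upper spec limits.

    Cp  = (USL - LSL) / (6 * s)        -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * s)  -- actual capability
    """
    mean = statistics.mean(data)
    s = statistics.stdev(data)
    cp = (usl - lsl) / (6 * s)
    cpk = min(usl - mean, mean - lsl) / (3 * s)
    return cp, cpk

# Hypothetical fill-weight measurements against a 9.0-11.0 specification.
data = [9.8, 10.0, 10.2, 10.1, 9.9]
cp, cpk = capability(data, lsl=9.0, usl=11.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

Because this sample happens to be centered between the limits, Cp and Cpk coincide; for an off-target process, Cpk drops below Cp, which is how the study flags a need for re-targeting rather than variation reduction.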

Depending on the circumstances, not every tool needs to be used, other tools could be substituted, and the application of each tool can vary.
