My Reading on ASQ CQA Handbook, Part V (Part 1)


My Reading of the ASQ CQA Handbook: the first half of Part V (VA–VC). My pre-exam self-study notes (14.7% of the exam). 29th September 2018 – 8th October 2018

Charlie Chong/ Fion Zhang


Industrial Robotics

Charlie Chong/ Fion Zhang


The Magical Book of CQA

Charlie Chong/ Fion Zhang


闭门练功 (practicing behind closed doors: secluded self-study)

Charlie Chong/ Fion Zhang


Charlie Chong/ Fion Zhang


Fion Zhang at Heilongjiang 29th September 2018

Charlie Chong/ Fion Zhang


ASQ Mission: The American Society for Quality advances individual, organizational, and community excellence worldwide through learning, quality improvement, and knowledge exchange.

Charlie Chong/ Fion Zhang


BOK Knowledge Area                                              Questions   Percentage Score

I.   Auditing Fundamentals                                        30          20%
II.  Audit Process                                                60          40%
III. Auditor Competencies                                         23          15.3%
IV.  Audit Program Management and Business Applications           15          10%
V.   Quality Tools and Techniques                                 22          14.7%
Total                                                            150          100%

https://asq.org/cert/resource/docs/cqa_bok.pdf

Charlie Chong/ Fion Zhang


Part V

Part V  Quality Tools and Techniques [22 of the CQA exam questions, or 14.7 percent]
_____________________________________________________
Chapter 18  Basic Quality and Problem-Solving Tools / Part VA
Chapter 19  Process Improvement Techniques / Part VB
Chapter 20  Basic Statistics / Part VC
Chapter 21  Process Variation / Part VD
Chapter 22  Sampling Methods / Part VE
Chapter 23  Change Control and Configuration Management / Part VF
Chapter 24  Verification and Validation / Part VG
Chapter 25  Risk Management Tools / Part VH

Charlie Chong/ Fion Zhang


Part V

Auditors use many types of tools to plan and perform an audit, as well as to analyze and report audit results. An understanding of these tools and their application is essential for the performance of an effective audit, since both auditors and auditees use various tools and techniques to define processes, identify and characterize problems, and report results. An auditor must have sufficient knowledge of these tools in order to determine whether the auditee is using them correctly and effectively. This section provides basic information on some of the most common tools, their use, and their limitations. For more in-depth information on the application of tools, readers should consult an appropriate textbook.

Charlie Chong/ Fion Zhang


Part VA

Chapter 18  Basic Quality and Problem-Solving Tools / Part VA
__________________________________________________

Pareto Charts
Pareto charts, also called Pareto diagrams or Pareto analysis, are based on the Pareto principle, which suggests that most effects come from relatively few causes. As shown in Figure 18.1, a Pareto chart consists of a series of bars in descending order. The bars with the highest incidence of failure, costs, or other occurrences are on the left side. The miscellaneous category, an exception, always appears at the far right, regardless of size.

Pareto charts display, in order of importance, the contribution of each item to the total effect and the relative rank of the items. Pareto charts can be used to prioritize problems and to check performance of implemented solutions to problems. The Pareto chart can be a powerful management tool for focusing effort on the problems and solutions that have the greatest payback. Some organizations construct year-end Pareto diagrams and form corporate improvement teams in the areas determined to be in need of the greatest attention.
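A rough illustration of the ordering and cumulative-percentage logic behind a Pareto chart, as a short Python sketch. The defect categories and counts below are invented for illustration; they are not taken from the handbook.

# Minimal Pareto-analysis sketch (hypothetical defect counts).
defect_counts = {
    "Scratches": 48,
    "Dents": 21,
    "Misalignment": 12,
    "Wrong label": 7,
    "Other": 5,          # miscellaneous bucket, always shown last regardless of size
}

# Sort descending by count, keeping the miscellaneous "Other" bucket at the far right.
ordered = sorted(
    ((k, v) for k, v in defect_counts.items() if k != "Other"),
    key=lambda kv: kv[1],
    reverse=True,
) + [("Other", defect_counts["Other"])]

total = sum(defect_counts.values())
cumulative = 0
for category, count in ordered:
    cumulative += count
    print(f"{category:<13} {count:>3}  share {count / total:6.1%}  cumulative {cumulative / total:6.1%}")

The cumulative column is what the line overlaid on a Pareto bar chart represents; the categories to the left of the point where it passes roughly 80% are the "vital few."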

Charlie Chong/ Fion Zhang


Figure 18.1 SQM software example of a frequency Pareto analysis.

Part VA

(Figure: frequency bar chart of defect categories, with a cumulative-percentage line and the 80% level marked.)

Charlie Chong/ Fion Zhang


Part VA

Pareto Analysis: the vital few and the trivial many
Pareto charts, also called Pareto diagrams or Pareto analysis, are based on the Pareto principle, which suggests that most effects come from relatively few causes.

BREAKING DOWN 'Pareto Analysis': In 1906, Italian economist Vilfredo Pareto discovered that 80% of the land in Italy was owned by just 20% of the people in the country. He extended this research and found that the disproportionate wealth distribution was also the same across all of Europe. The 80/20 rule was formally defined as the rule that the top 20% of a country's population accounts for an estimated 80% of the country's wealth or total income.

Joseph Juran, a Romanian-American business theorist, stumbled on Pareto's research work 40 years after it was published, and named the 80/20 rule Pareto's Principle of Unequal Distribution. Juran extended Pareto's principle to business situations to understand whether the rule could be applied to problems faced by businesses. He observed that in quality control departments, most production defects resulted from a small percentage of the causes of all defects, a phenomenon he described as "the vital few and the trivial many."

Following the work of Pareto and Juran, the British NHS Institute for Innovation and Improvement reported that 80% of innovations come from 20% of the staff; 80% of decisions made in meetings come from 20% of the meeting time; 80% of your success comes from 20% of your efforts; 80% of the complaints you receive are about 20% of your services; and so on.

Read more: https://www.investopedia.com/terms/p/pareto-analysis.asp#ixzz5SU1aEH9H

Charlie Chong/ Fion Zhang


Part VA

Cause-and-Effect Diagrams
The cause-and-effect diagram (C-E diagram) is a visual method for analyzing causal factors for a given effect in order to determine their relationship. The C-E diagram, one of the most widely used quality tools, is also called an Ishikawa diagram (after its inventor) or a fishbone diagram (because of its shape). Basic characteristics of the C-E diagram include the following:
- It represents the factors that might contribute to an observed condition or effect
- It clearly shows interrelationships among possible causal factors
- The interrelationships shown are usually based on known data
C-E diagrams are an effective way to generate and organize the causes of observed events or conditions since they display causal information in a structured way. C-E diagrams consist of a description of the effect written in the head of the fish and the causes of the effect identified in the major bones of the body. These main branches typically include four or more of the following six influences but may be specifically tailored as needed:
1. People (worker)
2. Equipment (machine)
3. Method
4. Material
5. Environment
6. Measurement
Figure 18.2 is a C-E diagram that identifies all the program elements that should be in place to prevent design output errors.

Charlie Chong/ Fion Zhang


Part VA

Cause-and-Effect Diagrams

Charlie Chong/ Fion Zhang


Figure 18.2 Cause-and-effect diagram.

Part VA

These main branches typically include four or more of the following six influences but may be specifically tailored as needed: 1. People (worker) 2. Equipment (machine) 3. Method 4. Material 5. Environment 6. Measurement

Charlie Chong/ Fion Zhang






Part VA

Fishbone Diagram Background
Fishbone diagrams (also known as Ishikawa diagrams) can be used to answer the following questions that commonly arise in problem solving: What are the potential root causes of a problem? What category of process inputs represents the greatest source of variability in the process output?

Dr. Kaoru Ishikawa developed the "Fishbone Diagram" at the University of Tokyo in 1943. Hence the Fishbone Diagram is frequently referred to as an "Ishikawa Diagram". Another name for this diagram is the "Cause & Effect" or CE diagram. As illustrated below, a completed fishbone diagram includes a central "spine" and several branches reminiscent of a fish skeleton. The diagram is used in process improvement methods to identify all of the contributing root causes likely to be causing a problem. The fishbone chart is an initial step in the screening process. After identifying potential root cause(s), further testing will be necessary to confirm the true root cause(s). This methodology can be used on any type of problem, and can be tailored by the user to fit the circumstances.

Using the Ishikawa approach to identifying the root cause(s) of a problem provides several benefits to process improvement teams:
- Constructing a fishbone diagram is straightforward and easy to learn.
- The fishbone diagram can incorporate metrics but is primarily a visual tool for organizing critical thinking.
- By involving the workforce in problem resolution, the preparation of the fishbone diagram educates the whole team.
- Using the Ishikawa method to explore root causes and record them helps organize the discussion to stay focused on the current issues.
- It promotes "system thinking" through visual linkages.
- It also helps prioritize further analysis and corrective actions.

https://www.moresteam.com/toolbox/fishbone-diagram.cfm

Charlie Chong/ Fion Zhang


Part VA

How to Get Started
This tool is most effective when used in a team or group setting.
1. To create a fishbone diagram, you can use any of a variety of materials. In a group setting you can use a white board, butcher-block paper, or a flip chart to get started. You may also want to use "Post-It" notes to list possible causes, with the ability to re-arrange the notes as the diagram develops.
2. Write the problem to be solved (the EFFECT) as descriptively as possible on one side of the work space, then draw the "backbone of the fish", as shown below. The example we have chosen to illustrate is "Missed Free Throws" (an acquaintance of ours just lost an outdoor three-on-three basketball tournament due to missed free throws).
3. The next step is to decide how to categorize the causes. There are two basic methods: (A) by function, or (B) by process sequence. The most frequent approach is to categorize by function. In manufacturing settings the categories are often: Machine, Method, Materials, Measurement, People, and Environment. In service settings, Machine and Method are often replaced by Policies (high-level decision rules) and Procedures (specific tasks). In this case, we will use the manufacturing functions as a starting point, less Measurement, because there was no variability experienced from measurements (it's easy to see if the ball goes through the basket).

https://www.moresteam.com/toolbox/fishbone-diagram.cfm

Charlie Chong/ Fion Zhang


Part VA

4. You can see that this is not enough detail to identify specific root causes. There are usually many contributors to a problem, so an effective fishbone diagram will have many potential causes listed in categories and sub-categories. The detailed sub-categories can be generated from either or both of two sources:
   1. Brainstorming by group/team members based on prior experiences.
   2. Data collected from check sheets or other sources.
A closely related cause-and-effect analytical tool is the "5-Why" approach, which states: "Discovery of the true root cause requires answering the question 'Why?' at least 5 times". See the 5-Why feature of the Toolbox. Additional root causes are added to the fishbone diagram below:

https://www.moresteam.com/toolbox/fishbone-diagram.cfm

Charlie Chong/ Fion Zhang


Part VA

5. The usefulness of a fishbone diagram depends upon the level of development: moving past symptoms to the true root cause, and quantifying the relationship between the primary root causes and the effect. You can take the analysis to a deeper level by using Regression Analysis to quantify correlation, and Designed Experiments to quantify causation. As you identify the primary contributors, and hopefully quantify correlation, add that information to your chart, either directly or with footnotes.

6. (Cont.) The following chart has the top five primary root cause contributors highlighted in gold. The note "MC" (for Mathematical Correlation) attached to air pressure indicates that strong correlation has been established through statistical analysis of data (the lower the air pressure, the less bounce off the rim). If you have ever tried to shoot baskets at a street fair or carnival to win a prize, you know that the operator always over-inflates the ball to lower your chances. Pick any system that works for you; you could circle instead of highlighting. The priority numbers can carry over to a corrective action matrix to help organize and track improvement actions. The tutorial provided below shows how to make and use a fishbone diagram using EngineRoom.

https://www.moresteam.com/toolbox/fishbone-diagram.cfm

Charlie Chong/ Fion Zhang


Part VA

Categorize the Causes by Function

Manufacturing    Services
People           People
Machine          Policy
Methods          Procedures
Materials        Materials
Measurement      Measurement
Environment      Environment

https://www.moresteam.com/toolbox/fishbone-diagram.cfm

Charlie Chong/ Fion Zhang


Part VA

Flowcharts and Process Mapping
Process maps and flowcharts are used to depict the steps or activities in a process or system that produces some output. Flowcharts are specific tools for depicting sequential activities and typically use standard symbols in their creation. Flowcharts and process maps are effective means for understanding procedures and overall processes and are used by auditees to help define how work is performed. Flowcharts are especially helpful in understanding processes that are complicated or that appear to be in a state of disorder. Auditors may also use flowcharts to help understand both production and service processes during audit preparation.

A flowchart may be used to describe an existing system or process or to design a new one. It can be used to:
- Develop a common understanding of an overall process, system, and sequence of operations
- Identify inspection and checkpoints that result in a decision
- Identify personnel (by job title) performing specific steps
- Identify potential problem areas, bottlenecks, unnecessary steps or loops, and rework loops
- Discover opportunities for changes and improvements
- Guide activities for identifying problems, theorizing about root causes, developing potential corrective actions and solutions, and achieving continuous improvement

Flowcharting usually follows a sequence from top to bottom and left to right, with arrowheads used to indicate the direction of the activity sequence. Common symbols often used for quality applications are shown in Figure 18.3. However, there are many other types of symbols used in flowcharting, such as ANSI Y15.3, Operation and Flow Process Charts (see Figures 18.4–18.8). Templates and computer software, both of which are easy to use and fairly inexpensive, are available for making flowcharts.

Charlie Chong/ Fion Zhang


Part VA

The implementation of a process-based QMS (such as ISO 9001:2008) and the use of process auditing techniques have made charting an important auditing tool. In the book How to Audit the Process-Based QMS, the authors state, "Many auditors find it useful to draw a flowchart of the operations about to be audited. What processes are performed and what are the linkages? This also helps to define the interfaces where information and other resources come into and flow out of the audited area." They continue by stating, "To make maximum use of the process approach to auditing, the work papers should reflect the flow of activities to be audited."

In the book The Process Auditing Techniques Guide, the author explains, "The primary tool of process auditing is creating a process flow diagram [PFD] or flowchart. Charting the process steps [sequential activities] is an effective method for describing the process. For auditing purposes, process flow diagrams should be used to identify sequential process steps [activities] and kept as simple or as reasonable as possible."

Another variation of a flowchart is a process map. Process maps are very good tools that show inputs, outputs, and area or department responsibilities along a timeline. The complexity of process maps can vary, but for auditing, simplicity is the key.

Charlie Chong/ Fion Zhang


Part VA

Process maps are very good tools that show inputs, outputs, and area or department responsibilities along a timeline. The complexity of process maps can vary, but for auditing, simplicity is the key.

Charlie Chong/ Fion Zhang




Part VA

Process Flow Diagram (PFD).

Charlie Chong/ Fion Zhang


Part VA Charlie Chong/ Fion Zhang


Figure 18.3 Common flowchart symbols.

Part VA Charlie Chong/ Fion Zhang


Figure 18.4 Activity sequence flowchart.

Part VA Charlie Chong/ Fion Zhang


Part VA

Figure 18.5 Top-down flowchart. A top-down diagram shows the breakdown of a system to its lowest manageable levels. They are used in structured programming to arrange program modules into a tree. Each module is represented by a box, which contains the module's name. The tree structure visualizes the relationships between modules.

https://www.edrawsoft.com/topdowndiagram.php

Charlie Chong/ Fion Zhang




Part VA

Figure 18.6 Matrix flowchart. Deployment or Matrix Flowchart: a deployment flowchart maps out the process in terms of who is doing the steps. It is in the form of a matrix, showing the various participants and the flow of steps among these participants. It is chiefly useful in identifying who is providing inputs or services to whom, as well as areas where different people may be needlessly doing the same task. See the Deployment or Matrix Flowchart.

https://www.edrawsoft.com/Flowchart-Definition.php

Charlie Chong/ Fion Zhang


Part VA

Figure 18.7 Flow process worksheet. PFD Worksheets (Process Flow Diagrams) In RCM++, there are two kinds of process flow diagrams (PFDs): a graphical process flow diagram, which is a high level chart of a process; and a PFD worksheet, which integrates the chart into a worksheet that records more detailed information about what the product goes through in each step of the manufacturing or assembly process. This includes the processing of individual components, transportation of materials, storage, etc. Also recorded are descriptions of the process and product characteristics that are affected in each step of the process, how these characteristics are controlled and what needs to be achieved at each step. For example, a process characteristic may be the temperature range for wax that will be sprayed onto the finished vehicle and a product characteristic may be the required wax thickness.

Charlie Chong/ Fion Zhang


Flow process worksheet.

Part VA Charlie Chong/ Fion Zhang


Part VA

Figure 18.8 A process map. Process mapping is used to visually demonstrate all the steps and decisions in a particular process. A process map or flowchart describes the flow of materials and information, displays the tasks associated with a process, shows the decisions that need to be made along the chain and shows the essential relationships between the process steps.

https://www.lucidchart.com/pages/process-mapping/how-to-make-a-process-map

Charlie Chong/ Fion Zhang




Part VA

Statistical Process Control (SPC) Charts
Many companies use statistical process control (SPC) techniques as part of a continuing improvement effort. Auditors need to be knowledgeable about the methods and application of control charts in order to determine the adequacy of their use and evaluate the results achieved. Auditors need this knowledge for observation purposes, but they are not required to plot control charts as part of the audit process.

Control charts, also called process control charts or run charts, are tools used in SPC. SPC recognizes that some random variation always exists in a process and that the goal is to control the distribution rather than individual dimensions. Operators and quality control technicians use SPC to determine when to adjust a process and when to leave it alone. The ability to operate to a tight tolerance without producing defects can be a major business advantage. Control charts can tell an organization when a process is good enough so that resources can be directed to more pressing needs.

A control chart, such as the one shown in Figure 18.9, is used to distinguish variations in a process over time. Variations can be attributed to either special or common causes.
- Common-cause variations repeat randomly within predictable limits and can include chance causes, random causes, system causes, and inherent causes.
- Special-cause variations indicate that some factors affecting the process need to be identified, investigated, and brought under control. Such causes include assignable causes, local causes, and specific causes.
Control charts use operating data to establish limits within which future observations are expected to remain if the process remains unaffected by special causes.

Charlie Chong/ Fion Zhang


Figure 18.9 Control chart.

Part VA Charlie Chong/ Fion Zhang


Part VA

Statistical Process Control (SPC) Charts
Variations can be attributed to either special or common causes.
- Common-cause variations repeat randomly within predictable limits and can include:
  - chance causes,
  - random causes,
  - system causes, and
  - inherent causes.
- Special-cause variations indicate that some factors affecting the process need to be identified, investigated, and brought under control. Such causes include:
  - assignable causes,
  - local causes, and
  - specific causes.

Charlie Chong/ Fion Zhang


Part VA

Control charts can monitor the aim and variability, and thereby continually check the stability of a process. This check of stability in turn ensures that the statistical distribution of the product characteristic is consistent with quality requirements. Control charts are commonly used to: 1. Attain a state of statistical control 2. Monitor a process 3. Determine process capability The type of control chart used in a specific situation depends on the type of data being measured or counted.

Charlie Chong/ Fion Zhang


Part VA

Variable Data
Variable data, also called continuous data or measurement data, are collected from measurements of the items being evaluated. For example, the measurement of physical characteristics such as time, length, weight, pressure, or volume through inspection, testing, or measuring equipment constitutes variable data collection. Variable data can be measured and plotted on a continuous scale and are often expressed as fractions or decimals.

The X̄ (average) chart and the R (range) chart are the most common types of control charts for variable data. The X̄ chart illustrates the average measurement of samples taken over time. The R chart illustrates the range of the measurements of the samples taken. For these charts to be accurate, it is critical that the individual items composing the sample are pulled from the same basic production process. That is, the samples should be drawn around the same time, from the same machine, from the same raw material source, and so on.8 These charts are often used in conjunction with one another to jointly record the mean and range of samples taken from the process at fairly regular intervals. Figure 18.10 shows an X̄ and R chart.

Charlie Chong/ Fion Zhang


Part VA

Figure 18.10 X and R chart example.

http://asq.org/learn-about-quality/tools-templates.html

Charlie Chong/ Fion Zhang


Part VA

X̄ and R chart
In statistical quality control, the X̄ and R chart is a type of control chart used to monitor variables data when samples are collected at regular intervals from a business or industrial process. The chart is advantageous in the following situations:
- The sample size is relatively small (say, n ≤ 10; X̄ and s charts are typically used for larger sample sizes)
- The sample size is constant
- Humans must perform the calculations for the chart

The "chart" actually consists of a pair of charts: one to monitor the process standard deviation (as approximated by the sample range) and another to monitor the process mean, as is done with the X̄ and s and individuals control charts. The X̄ and R chart plots the mean value of the quality characteristic across all units in the sample, x̄_i, plus the range of the quality characteristic across all units in the sample, R = x_max − x_min.

The normal distribution is the basis for the charts and requires the following assumptions:
- The quality characteristic to be monitored is adequately modeled by a normally distributed random variable;
- The parameters μ (the mean, or expectation, of the distribution) and σ (the standard deviation) of the random variable are the same for each unit, and each unit is independent of its predecessors or successors;
- The inspection procedure is the same for each sample and is carried out consistently from sample to sample.

As with the X̄ and s and individuals control charts, the X̄ chart is only valid if the within-sample variability is constant. Thus, the R chart is examined before the X̄ chart; if the R chart indicates the sample variability is in statistical control, then the X̄ chart is examined to determine whether the sample mean is also in statistical control. If, on the other hand, the sample variability is not in statistical control, then the entire process is judged to be not in statistical control regardless of what the X̄ chart indicates.

https://en.wikipedia.org/wiki/X%CC%85_and_R_chart

Charlie Chong/ Fion Zhang


Part VA

R chart
Centre line:  R̄ = ( Σ_{i=1..m} [ max(x_ij) − min(x_ij) ] ) / m
UCL = D4 · R̄
LCL = D3 · R̄
Plot statistic:  R_i = max(x_ij) − min(x_ij)

https://en.wikipedia.org/wiki/X%CC%85_and_R_chart

Charlie Chong/ Fion Zhang


Part VA

x̄ chart
Centre line:  x̿ = ( Σ_{i=1..m} Σ_{j=1..n} x_ij ) / (m·n)
UCL / LCL = x̿ ± A2 · R̄
Plot statistic:  x̄_i = ( Σ_{j=1..n} x_ij ) / n

https://en.wikipedia.org/wiki/X%CC%85_and_R_chart

Charlie Chong/ Fion Zhang


Part VA

Plotting the X̄ and R chart
An X̄ (X-bar) and R chart is a type of statistical process control chart for use with continuous data collected in subgroups at set time intervals, usually between 3 and 5 pieces per subgroup. The mean (X̄) of each subgroup is charted on the top graph and the range (R) of the subgroup is charted on the bottom graph. Out-of-control points or patterns can occur on either the X̄ or R chart.

Like all control charts, an X̄ and R chart is used to answer the following questions:
- Is the process stable over time?
- What is the effect of a process change on the output characteristics?
- How will I know if the process becomes unstable, or the performance changes over time?

When is it used?
- Constructed throughout the DMAIC (Define, Measure, Analyze, Improve and Control) process, particularly in the Measure, Analyze and Control phases of the cycle.
- Used to understand process behavior, evaluate different treatments or methods, and to control a process.
- Recommended for subgroup sizes of 10 or less. If the subgroup size exceeds 10, the range chart is replaced by a chart of the subgroup standard deviation, or S chart.

NOTE: It has been estimated that 98% of all processes can be effectively represented by using either XmR charts or X̄ and R charts.

https://www.moresteam.com/university/workbook/wb_spcxbarandrintro.pdf

Charlie Chong/ Fion Zhang


Part VA

How to Construct an X̄ and R Control Chart
To construct an X̄ and R chart, follow the process steps below. For subgroup sizes greater than 10, substitute the subgroup standard deviation (S) for range (R), and use constants for S from the table located after the instructional steps.

1. Record subgroup observations.
2. Calculate the average (x̄_i) and range (R_i) for each subgroup:
   x̄_i = ( Σ_{j=1..n} x_ij ) / n,   R_i = max(x_ij) − min(x_ij)
3. Calculate the average R value (R-bar) and plot this value as the centerline on the R chart:
   R̄ = ( Σ_{i=1..m} [ max(x_ij) − min(x_ij) ] ) / m
4. Calculate the grand average (x̄-bar, written x̿) and plot this value as the centerline on the x̄ chart:
   x̿ = ( Σ_{i=1..m} Σ_{j=1..n} x_ij ) / (m·n)
   where m is the number of subgroups and n is the size of each subgroup (sample size).
5. Plot the x̄_i and R_i values for each subgroup in time series. You can create a meaningful control chart from as few as 6–7 data points, although a larger sample size (20+ subgroups) will provide much more reliability. In most cases, control limits are not calculated until at least 20 subgroups of data are collected.

https://www.moresteam.com/university/workbook/wb_spcxbarandrintro.pdf

Charlie Chong/ Fion Zhang


Part VA

6. Based on the subgroup size, select the appropriate constant, called D4, and multiply by R-bar (R̄) to determine the Upper Control Limit for the Range chart. All constants are available from the reference table. If the subgroup size is between 7 and 10, select the appropriate constant, called D3, and multiply by R-bar to determine the Lower Control Limit for the Range chart. There is no Lower Control Limit for the Range chart if the subgroup size is 6 or less.

   UCL(R) = R̄ × D4   (plot the Upper Control Limit on the R chart)
   LCL(R) = R̄ × D3   (plot the Lower Control Limit on the R chart)

7. Calculate the X̄ chart Upper Control Limit, or upper natural process limit, by multiplying R-bar by the appropriate A2 factor (based on subgroup size) and adding that value to the grand average (x̿). Calculate the X̄ chart Lower Control Limit, or lower natural process limit, by multiplying R-bar by the appropriate A2 factor and subtracting that value from the grand average (x̿).

   UCL(x̄) = x̿ + (A2 × R̄)
   LCL(x̄) = x̿ − (A2 × R̄)
   Plot the UCL/LCL on the x̄ chart.

8. After constructing the control chart, follow the same rules to assess stability that are used on mR charts. Make sure to evaluate the stability of the Range chart before drawing any conclusions about the Averages (X̄) chart: if the Range chart is out of control, the control limits on the Averages chart will be unreliable.

https://www.moresteam.com/university/workbook/wb_spcxbarandrintro.pdf
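A minimal Python sketch of the construction steps above, assuming subgroups of size n = 5 and the published constants for that subgroup size (A2 = 0.577, D3 = none, D4 = 2.114, as in the constants table later in this part); the three subgroups of measurements are invented for illustration.

# X-bar / R control-limit calculation for subgroups of size n = 5 (invented data).
subgroups = [
    [74.01, 73.99, 74.00, 74.02, 73.98],
    [74.00, 74.03, 73.97, 74.01, 74.00],
    [73.99, 74.00, 74.01, 73.98, 74.02],
]
A2, D3, D4 = 0.577, 0.0, 2.114      # constants for n = 5 (D3 is "none", treated as 0)

xbars = [sum(s) / len(s) for s in subgroups]        # subgroup averages (x-bar_i)
ranges = [max(s) - min(s) for s in subgroups]       # subgroup ranges (R_i)

xbarbar = sum(xbars) / len(xbars)                   # grand average: centerline of the x-bar chart
rbar = sum(ranges) / len(ranges)                    # average range: centerline of the R chart

ucl_r, lcl_r = D4 * rbar, D3 * rbar                 # R-chart limits
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # x-bar chart limits

print(f"R chart:     CL={rbar:.4f}  UCL={ucl_r:.4f}  LCL={lcl_r:.4f}")
print(f"x-bar chart: CL={xbarbar:.4f}  UCL={ucl_x:.4f}  LCL={lcl_x:.4f}")

In practice at least 20 subgroups would be collected before treating these limits as trustworthy, as the text notes.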

Charlie Chong/ Fion Zhang


Part VA

A Few Notes about the X̄ and R Charts
False alarm in the X̄ chart:
- The Type I error in a control chart is called a false alarm.
- When a control chart declares a process not-in-control when in fact it is in control, it is a false alarm.
- Shewhart charts with 3-sigma limits have a false alarm probability of 0.0027 in any one sample.
- That is, approximately 3 out of 1000 samples could cause a false alarm.

https://courses.edx.org/asset-v1:TUMx+QEMx+2T2015+type@asset+block@8-2-2_X-bar_and_R-Charts.pdf
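The 0.0027 probability can be reproduced directly from the normal distribution. A quick check, assuming SciPy is available for the normal CDF:

from scipy.stats import norm

# Probability that an in-control, normally distributed point falls outside +/- 3 sigma.
p_false_alarm = 2 * (1 - norm.cdf(3))
print(f"false alarm probability per sample: {p_false_alarm:.4f}")            # about 0.0027 (roughly 3 in 1000)
print(f"average run length between false alarms: {1 / p_false_alarm:.0f}")   # about 370 points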

Charlie Chong/ Fion Zhang


Control Chart Constants

Part VA https://www.moresteam.com/university/workbook/wb_spcxbarandrintro.pdf

Charlie Chong/ Fion Zhang


Part VA

X̄-R Chart Constants (tabular values for X-bar and range charts)

Subgroup size   A2      d2      D3      D4
2               1.880   1.128   -       3.268
3               1.023   1.693   -       2.574
4               0.729   2.059   -       2.282
5               0.577   2.326   -       2.114
6               0.483   2.534   -       2.004
7               0.419   2.704   0.076   1.924
8               0.373   2.847   0.136   1.864
9               0.337   2.970   0.184   1.816
10              0.308   3.078   0.223   1.777
11              0.285   3.173   0.256   1.744
12              0.266   3.258   0.283   1.717
13              0.249   3.336   0.307   1.693
14              0.235   3.407   0.328   1.672
15              0.223   3.472   0.347   1.653
16              0.212   3.532   0.363   1.637
17              0.203   3.588   0.378   1.622
18              0.194   3.640   0.391   1.608
19              0.187   3.689   0.403   1.597
20              0.180   3.735   0.415   1.585

(A dash for D3 means there is no lower control limit for the range chart at that subgroup size.)

https://www.pqsystems.com/qualityadvisor/formulas/x_bar_range_f.php

Charlie Chong/ Fion Zhang


Part VA

Exercise: Following is a table of data sampled twelve times from a process; the process mean is supposed to be 74.000 inches. Plot these data in an X-bar R chart to determine if the process is in statistical control (note: n = 6).

https://www.moresteam.com/university/workbook/wb_spcxbarandrintro.pdf

Charlie Chong/ Fion Zhang


Part VA

UCL/LCL:

Range chart:
UCL(R) = R̄ × D4 = 0.026 × 2.004 = 0.0521   (plot the Upper Control Limit on the R chart)
LCL(R) = R̄ × D3   (for n = 6 there is no LCL)
Plot the UCL/LCL on the R chart.

Mean (x̄) chart:
UCL(x̄) = x̿ + (A2 × R̄) = 74.001 + (0.483 × 0.026) = 74.013558
LCL(x̄) = x̿ − (A2 × R̄) = 74.001 − (0.483 × 0.026) = 73.988442
Plot the UCL/LCL on the x̄ chart.

https://www.moresteam.com/university/workbook/wb_spcxbarandrintro.pdf

Charlie Chong/ Fion Zhang


Part VA

Western Electric Rules
The "Western Electric" rules are used to increase the sensitivity of the X̄ chart, in addition to the basic rule that any one point outside of the 3-sigma limit indicates an out-of-control situation:
1. Two of three consecutive plots fall outside of a 2-sigma warning limit on the same side of the center line.
2. Four of five consecutive plots fall outside of a 1-sigma warning limit on the same side.
3. More than seven consecutive plots fall above or below the centerline.
4. More than seven consecutive plots are in a run-up or a run-down.

https://www.qimacros.com/control-chart/stability-analysis-control-chart-rules/
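One rough way to check these run rules in code is a sliding-window test over the standardized points, as in the Python sketch below; the function name, thresholds, and example series are illustrative assumptions, not taken from the source.

# Rough checker for the Western Electric run rules above (plus the basic 3-sigma rule).
# mean and sigma come from the control-chart setup; the example series is invented.
def weco_violations(points, mean, sigma):
    z = [(p - mean) / sigma for p in points]
    hits = []
    for i in range(len(z)):
        if abs(z[i]) > 3:
            hits.append((i, "point beyond the 3-sigma limit"))
        last3, last5, last8 = z[max(0, i - 2):i + 1], z[max(0, i - 4):i + 1], z[max(0, i - 7):i + 1]
        for side in (+1, -1):                                  # check each side of the center line
            if len(last3) == 3 and sum(side * v > 2 for v in last3) >= 2:
                hits.append((i, "rule 1: two of three beyond 2-sigma, same side"))
            if len(last5) == 5 and sum(side * v > 1 for v in last5) >= 4:
                hits.append((i, "rule 2: four of five beyond 1-sigma, same side"))
            if len(last8) == 8 and all(side * v > 0 for v in last8):
                hits.append((i, "rule 3: eight consecutive points on one side of the centerline"))
        if i >= 7:                                             # rule 4: run-up or run-down of eight points
            diffs = [z[k + 1] - z[k] for k in range(i - 7, i)]
            if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
                hits.append((i, "rule 4: eight consecutive points trending up or down"))
    return hits

series = [0.1, -0.2, 0.3, 0.5, 0.8, 1.1, 1.4, 1.8, 2.2, 2.6]   # invented series that drifts upward
for index, rule in weco_violations(series, mean=0.0, sigma=1.0):
    print(index, rule)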

Charlie Chong/ Fion Zhang


Part VA

Nelson Rules
Nelson rules are a method in process control of determining whether some measured variable is out of control (unpredictable versus consistent). Rules for detecting "out-of-control" or non-random conditions were first postulated by Walter A. Shewhart [1] in the 1920s. The Nelson rules were first published in the October 1984 issue of the Journal of Quality Technology.

https://en.wikipedia.org/wiki/Nelson_rules

Charlie Chong/ Fion Zhang


Part VA

Attribute Data
Attribute data, also referred to as discrete data or counted data, provide information on the number and frequency of occurrences. By counting and plotting discrete events (the number of defects or the percentage of failures, for example) in integer values (1, 2, 3), an auditor is able to look at previously defined criteria and rate the product or system as pass/fail, acceptable/unacceptable, or go/no-go.

Several basic types of control charts can be used for charting attribute data. Attribute data can be either a fraction nonconforming or a number of defects or nonconformities observed in the sample.
- The p chart is used to chart the fraction of units defective. The units are classified into one of two states: go/no-go, acceptable/unacceptable, conforming/nonconforming, yes/no, and so on. The sample size may be fixed or variable, which makes the technique very effective for statistically monitoring nontraditional processes such as percentage of on-time delivery. However, if the sample size is variable, control limits must be calculated for each sample taken.
- The np chart uses the number of nonconforming units in a sample. This chart is sometimes easier for personnel who are not trained in SPC. It is easier to understand this chart when the sample size is constant, but it can be variable like the p chart.
- The c chart plots the number of nonconformities per some unit of measure. For example, the total number of nonconformities could be counted at final inspection of a product and charted on a c chart. The number of nonconformities may be made up of several distinct defects, which might then be analyzed for improvement of the process. For this chart, the sample size must be constant from unit to unit.
- The u chart is used for the average number of nonconformities per some unit of measure. Sample size can be either variable or constant since it is charting an average. A classic example is the number of nonconformities in a square yard of fabric in the textile industry. Bolts of cloth may vary in size, but an average can be calculated.

Figure 18.11 is an example of plotting attribute data using a u chart.

Charlie Chong/ Fion Zhang


Figure 18.11 u chart for the average errors per truck for 20 days of production.

Part VA Charlie Chong/ Fion Zhang


Part VA

p-chart
What is it? A p-chart is an attributes control chart used with data collected in subgroups of varying sizes. Because the subgroup size can vary, it shows a proportion of nonconforming items rather than the actual count. p-charts show how the process changes over time. The process attribute (or characteristic) is always described in a yes/no, pass/fail, or go/no-go form. For example, use a p-chart to plot the proportion of incomplete insurance claim forms received weekly. The subgroup would vary, depending on the total number of claims each week. p-charts are used to determine if the process is stable and predictable, as well as to monitor the effects of process improvement theories.
What does it look like? The p-chart shows the proportion of nonconforming units in subgroups of varying sizes.

Charlie Chong/ Fion Zhang


Part VA

p-Chart
In statistical quality control, the p-chart is a type of control chart used to monitor the proportion of nonconforming units in a sample, where the sample proportion nonconforming is defined as the ratio of the number of nonconforming units to the sample size, n. The p-chart only accommodates "pass"/"fail"-type inspection as determined by one or more go/no-go gauges or tests, effectively applying the specifications to the data before they are plotted on the chart. Other types of control charts display the magnitude of the quality characteristic under study, making troubleshooting possible directly from those charts.

Assumptions
The binomial distribution is the basis for the p-chart and requires the following assumptions:
- The probability of nonconformity p is the same for each unit;
- Each unit is independent of its predecessors or successors;
- The inspection procedure is the same for each sample and is carried out consistently from sample to sample.

The control limits for this chart type are

  p̄ ± 3·√( p̄(1 − p̄) / n )

where p̄ is the estimate of the long-term process mean established during control-chart setup. Naturally, if the lower control limit is less than or equal to zero, process observations only need be plotted against the upper control limit. Note that observations of proportion nonconforming below a positive lower control limit are cause for concern, as they are more frequently evidence of improperly calibrated test and inspection equipment or inadequately trained inspectors than of sustained quality improvement. Some organizations may elect to provide a standard value for p, effectively making it a target value for the proportion nonconforming. This may be useful when simple process adjustments can consistently move the process mean, but in general, this makes it more challenging to judge whether a process is fully out of control or merely off-target (but otherwise in control).

https://en.wikipedia.org/wiki/P-chart
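A minimal sketch of the limit formula above. The long-term estimate p̄ and the subgroup size n are invented numbers, and the function name is an illustrative choice.

import math

# p-chart control limits for one subgroup of size n, given the long-term estimate p_bar.
def p_chart_limits(p_bar, n):
    half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - half_width)      # a limit below zero is simply not plotted
    ucl = p_bar + half_width
    return lcl, ucl

lcl, ucl = p_chart_limits(p_bar=0.04, n=200)
print(f"LCL={lcl:.4f}  UCL={ucl:.4f}")
# Because the subgroup size may vary, the limits are recomputed for each subgroup's own n.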

Charlie Chong/ Fion Zhang


Part VA

Potential pitfalls
There are two circumstances that merit special attention:
- Ensuring enough observations are taken for each sample
- Accounting for differences in the number of observations from sample to sample

Adequate sample size
Sampling requires some careful consideration. If the organization elects to use 100% inspection on a process, the production rate determines an appropriate sampling rate, which in turn determines the sample size. If the organization elects to only inspect a fraction of units produced, the sample size should be chosen large enough so that the chance of finding at least one nonconforming unit in a sample is high; otherwise the false alarm rate is too high. One technique is to fix the sample size so that there is a 50% chance of detecting a process shift of a given amount (for example, from 1% defective to 5% defective). If δ is the size of the shift to detect, then the sample size should be set to

  n ≥ (3/δ)² · p̄(1 − p̄)

Another technique is to choose the sample size large enough so that the p-chart has a positive lower control limit, or

  n ≥ 9(1 − p̄)/p̄

https://en.wikipedia.org/wiki/P-chart
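Both rules of thumb reduce to one-line calculations. The sketch below assumes the 1%-to-5% shift example mentioned in the text (so p̄ = 0.01 and δ = 0.04); the printed values are rounded.

# Sample-size rules of thumb for a p-chart.
p_bar = 0.01      # long-term fraction nonconforming
delta = 0.04      # size of the shift to detect (0.05 - 0.01)

n_detect_shift = (3 / delta) ** 2 * p_bar * (1 - p_bar)   # ~50% chance of catching the shift
n_positive_lcl = 9 * (1 - p_bar) / p_bar                  # smallest n giving a positive LCL

print(f"n to detect the shift: at least {n_detect_shift:.0f}")   # about 56
print(f"n for a positive LCL:  at least {n_positive_lcl:.0f}")   # about 891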

Charlie Chong/ Fion Zhang


Part VA

Sensitivity of control limits
Some practitioners have pointed out that the p-chart is sensitive to the underlying assumptions, using control limits derived from the binomial distribution rather than from the observed sample variance. Due to this sensitivity to the underlying assumptions, p-charts are often implemented incorrectly, with control limits that are either too wide or too narrow, leading to incorrect decisions regarding process stability.[3] A p-chart is a form of the Individuals chart (also referred to as "XmR" or "ImR"), and these practitioners recommend the individuals chart as a more robust alternative for count-based data.

Meaning: The XmR chart is actually two charts: X is the data point being measured, and mR is the moving range, the difference between consecutive data point measurements. (ImR stands for "Individuals and Moving Range", another name for the same chart.)

Binomial-based p-chart limits, for reference:  p̄ ± 3·√( p̄(1 − p̄) / n )

https://en.wikipedia.org/wiki/P-chart

Charlie Chong/ Fion Zhang


Part VA

np-Chart
In statistical quality control, the np-chart is a type of control chart used to monitor the number of nonconforming units in a sample. It is an adaptation of the p-chart, used in situations where personnel find it easier to interpret process performance in terms of concrete numbers of units rather than the somewhat more abstract proportion.[1]

The np-chart differs from the p-chart in only the three following respects:
- The control limits are np̄ ± 3·√( n·p̄(1 − p̄) ), where n is the sample size and p̄ is the estimate of the long-term process mean established during control-chart setup.
- The number nonconforming (np), rather than the fraction nonconforming (p), is plotted against the control limits.
- The sample size, n, is constant.

https://en.wikipedia.org/wiki/Np-chart
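The np-chart limits differ from the p-chart limits only in scale (counts instead of proportions). A small sketch, with an invented p̄ and a constant sample size n:

import math

# np-chart control limits: the count of nonconforming units is plotted, so the
# centerline and limits are in units rather than proportions.
def np_chart_limits(p_bar, n):
    center = n * p_bar
    half_width = 3 * math.sqrt(n * p_bar * (1 - p_bar))
    return max(0.0, center - half_width), center, center + half_width

lcl, cl, ucl = np_chart_limits(p_bar=0.04, n=200)
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")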

Charlie Chong/ Fion Zhang


Part VA

SPC Chart Interpretations
An SPC chart is essentially a set of statistical control limits applied to a set of sequential data from samples chosen from a process. The data composing each of the plotted points are a location statistic such as an individual, an average, a median, a proportion, and so on. If the control chart is one to monitor variable data, then an additional associated chart for the process variation statistic can be utilized. Examples of variation statistics are:
- the range,
- the standard deviation, and
- the moving range.

Statistically Rare Patterns
By their design, control charts utilize unique and statistically rare patterns that can be associated with process changes. These relatively rare or unnatural patterns are usually assumed to be caused by disturbances or influences that interfere with the ordinary behavior of the process. These causes that disturb or alter the output of a process are called assignable causes (also local or specific causes). They can be caused by:
1. Equipment
2. Personnel
3. Materials

Charlie Chong/ Fion Zhang


Part VA

Statistically Rare Patterns. 道可道非常道 ("The Way that can be told is not the eternal Way", Tao Te Ching)

Charlie Chong/ Fion Zhang




Part VA

松下问童子 言师采药去 只在此山中 云深不知处 (Beneath the pines I asked the boy; he said his master had gone to gather herbs, somewhere on this mountain, but the clouds are too deep to know where.)

Charlie Chong/ Fion Zhang


Part VA

If the process is out of control, the process engineer looks for an assignable cause by following the out-of-control action plan (OCAP) associated with the control chart. Out of control refers to rejecting the assumption that the current data are from the same population as the data used to create the initial control chart limits.11

For classical Shewhart charts, a set of rules called the Western Electric Rules (WECO rules) and a set of trend rules are often used to determine out of control (see Figure 18.12). The WECO rules are based on probability. We know that, for a normal distribution, the probability of encountering a point outside ±3σ is 0.3%. This is a rare event. Therefore, if we observe a point outside the control limits, we conclude the process has shifted and is unstable. Similarly, we can identify other events that are equally rare and use them as flags for instability. The probability of observing two points out of three in a row between 2σ and 3σ, and the probability of observing four points out of five in a row between 1σ and 2σ, are also about 0.3%. Figure 18.13 is an example of any point above +3 sigma in Figure 18.12. Figure 18.14 is an example of eight consecutive points on one side of the center line in Figure 18.12.

While the WECO rules increase a Shewhart chart's sensitivity to trends or drifts in the mean, there is a severe downside to adding the WECO rules to an ordinary Shewhart control chart that the user should understand. When following the standard Shewhart "out-of-control" rule (i.e., signal if and only if you see a point beyond the ±3σ control limits) you will have "false alarms" every 371 points on the average. Adding the WECO rules increases the frequency of false alarms to about once in every 91.75 points, on the average.12 The user has to decide whether this price is worth paying (some users add the WECO rules, but take them "less seriously" in terms of the effort put into troubleshooting activities when out-of-control signals occur). Figure 18.15 is an example of four out of the last five points above +1 sigma in Figure 18.12.

Charlie Chong/ Fion Zhang


Figure 18.12 WECO rules for signaling "out of control."

Part VA

(Figure annotations: Rule-1, Rule-2, Rule-3, Rule-4.)

Charlie Chong/ Fion Zhang


Part VA

WECO (Western Electric Company) rules for signaling "out of control."

(Figure annotations: Rule-1, Rule-2, Rule-3, Rule-4.)

https://www.pmi.co.uk/sectors/manufacturing/

Charlie Chong/ Fion Zhang


Figure 18.13 Any point above the +3 sigma control limit (a point above 3 sigma, C line).

Part VA

(Figure annotation: Rule-1.)

Charlie Chong/ Fion Zhang


Figure 18.14 Consecutive points above the average (trend: 8 points in a row, but within the 3-sigma C line).

Part VA

(Figure annotations: "Rule-4, 9 points above X̄" and "8 points above X̄ only (Nelson rule)".)

Charlie Chong/ Fion Zhang


Figure 18.15 Four out of the last five points above +1 sigma.

Part VA

(Figure annotations: Rule-2, Rule-3.)

Charlie Chong/ Fion Zhang


Part VA

Nelson rules for signaling "out of control." The Nelson rules are an expanded set of rules developed to cover increasingly rare conditions.

(Figure annotations: WECO Rule-4 shown as "9 (8) points above CL"; "14 points in Zone C"; "6 points ascending or descending".)

https://www.qimacros.com/control-chart/stability-analysis-control-chart-rules/

Charlie Chong/ Fion Zhang


Part VA

Checklists, Check Sheets, Guidelines, and Log Sheets
Four basic tools are used by auditors in the performance of audits, to ensure consistency and effectiveness of the audit. Although each may be used independently of one another, they may be used together to document audit evidence. Also see "5. Auditing Tools and Working Papers" in Part II for more information about checklists, check sheets, guidelines, and log sheets.

• Checklists
Checklists are the most common tools used to collect data during an audit. They provide an organized form for identifying information to be collected and a means for recording information once it is collected. In addition, the checklist serves as a tool to help guide the audit team during audit performance. A checklist usually contains a listing of required items where audit evidence is needed, places for recording acceptable responses, and places for taking notes. An auditor's checklist is either a list of questions to answer or statements to verify. Figures 18.16–18.18 provide samples of checklists.

Charlie Chong/ Fion Zhang


Figure 18.16 Sample checklist, ISO 9001, clause 8.2.2, Internal auditing.

Part VA

ISO 9001:2008 Checklist
(Columns: Ref. | Question/Criteria | Yes/No | Comments/Data collection plan. The Yes/No and Comments columns are left blank for completion during the audit.)

8.2.2    Internal auditing
8.2.2-1  Are internal audits conducted at planned intervals?
8.2.2-2  Are audits carried out to determine conformance of the QMS to planned arrangements, the organization's QMS requirements, and this International Standard, and that the QMS has been effectively implemented and maintained?
8.2.2-3  Does the audit program plan consider the status and importance of the activities and areas to be audited?
8.2.2-4  Are audit criteria, scope, frequency, and methods defined?
8.2.2-5  Are auditors selected and audits conducted to ensure objectivity and impartiality of the audit process? Are auditors prevented from auditing their own work?
8.2.2-6  Are there documented procedures? Do the procedures cover responsibilities, requirements for planning and conducting, and recording and reporting results? [4.2.4]
8.2.2-7  Is action taken by management responsible for the area to eliminate nonconformities and their causes? Is this done without undue delay?
8.2.2-8  Are follow-up activities carried out to verify the implementation of the action? Are the verification results reported? [8.5.2]

Charlie Chong/ Fion Zhang


Figure 18.17 Sample quality system checklist.

Part VA

A. Review of customer requirements
1. Is there a quality review of purchase orders to identify special or unusual requirements?
2. Are requirements for special controls, facilities, equipment, and skills preplanned to ensure they will be in place when needed?
3. Have exceptions to customer requirements been taken?
4. Are customer requirements available to personnel involved in the manufacture, control, and inspection of the product?
5. Are supplier and subtier sketches, drawings, and specifications compatible with the customer's requirements?

B. Supplier control practices
6. Is there a system for identifying qualified sources, and is this system adhered to by the purchasing function?
7. Are initial audits of major suppliers conducted?
8. Does the system ensure that technical data (drawings, specifications, and so on) are included in purchase orders?
9. Is the number and frequency of inspections and tests adjusted based on supplier performance?

C. Nonconforming material
10. Are nonconformances identified and documented?
11. Are nonconformances physically segregated from conforming material where practical?
12. Is further processing of nonconforming items restricted until an authorized disposition is received?
13. Do suppliers know how to handle nonconformances?
14. Are process capability studies used as a part of the nonconforming material control and process planning?

D. Design and process change control routines
15. Are changes initiated by customers incorporated as specified?
16. Are internally initiated changes in processing reviewed to see if they require customer approval?
17. Is the introduction date of changes documented?
18. Is there a method of notifying subtier suppliers of applicable changes?

E. Process and product audits
19. Are process audits conducted?
20. Are product audits used independent of normal product acceptance plans?
21. Do the audits cover all operations, shifts, and products?
22. Do audit results receive management review?
23. Is the audit frequency adjusted based on observed trends?

Charlie Chong/ Fion Zhang


Part VA

Figure 18.18 Calibration area checklist.

Lab/Appraisal # _________________   Date: _________   Page 1 of ___
(Columns: Reference | Criteria | Results (Sat / Un-sat) | Comments. The Results and Comments columns are left blank for completion during the appraisal.)

NL-QAM      1. Is monitoring and data collection equipment calibrated?
NL-QAM      2. Is equipment calibration traceable to nationally recognized standards?
NL-QAP-5.1  3. Is equipment calibration performed using approved instructions?
NL-QAP-5.1  4. Are calibration records maintained for each piece of equipment?
NL-QAP-5.1  5. Is a use log maintained?

Charlie Chong/ Fion Zhang


Part VA

• Check sheets
Of the many tools available to auditors, a check sheet is the simplest and easiest to construct and use because there is no set form or content. The user may structure the check sheet to meet the needs of the situation under review. Figure 18.19 shows an example of a check sheet used in an audit of QMS documentation. The advantage of using a check sheet in this manner is the ability to demonstrate the magnitude of the impact of the issues identified in relation to the total population of the documents in the system. Rather than focusing on a single document, the auditor can easily demonstrate the impact on the process control (or process documentation) to the auditee. However, additional information will need to be provided to the auditee from the auditor's notes, such as:
- The document number, title, and so on
- A description of each nonconforming item
- The reason the item is nonconforming
For this reason, check sheets are often used with a log sheet or checklist to record the details of the issues found during the audit. Using check sheets in this manner is advantageous for the auditor because the auditee may use this information to easily construct a Pareto chart for corrective action. By doing so, the corrective action team's efforts will not only be more focused, but also the team has baseline data from which improvement may be demonstrated.

Charlie Chong/ Fion Zhang


Figure 18.19 Check sheet for documentation.

Part VA

Documentation check sheet

Type         Conforming (tally)                                        Nonconforming (tally)   Total   NC %
Procedures   ///// ////  (9)                                           /  (1)                   10     10
Records      ///// ///// ///// ///// ///// ///// ///// ///// /  (41)   ///// ////  (9)          50     18
Forms        ///// ///// ///// ///// /////  (25)                       ///// //  (7)            32     21.9
Labels       ///// ///// ///// ///// ///  (23)                         //  (2)                  25     8
Tags         ///// ///// ///// ///// ///// //  (27)                    ///// /  (6)             33     18.2
Summary      125                                                        25                      150     16.7
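The NC % column is just nonconforming divided by total. A quick Python sketch reproducing the figures above (the counts are transcribed from the check sheet tallies):

# Reproduce the NC % figures from the documentation check sheet.
check_sheet = {                 # type: (conforming, nonconforming)
    "Procedures": (9, 1),
    "Records": (41, 9),
    "Forms": (25, 7),
    "Labels": (23, 2),
    "Tags": (27, 6),
}
for doc_type, (ok, nc) in check_sheet.items():
    total = ok + nc
    print(f"{doc_type:<10} total={total:>3}  NC%={nc / total:5.1%}")

grand_nc = sum(nc for _, nc in check_sheet.values())
grand_total = sum(ok + nc for ok, nc in check_sheet.values())
print(f"Summary    total={grand_total}  NC%={grand_nc / grand_total:.1%}")   # 25/150 = 16.7%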

Charlie Chong/ Fion Zhang


Part VA

• Guidelines
Audit guidelines are used to help focus audit activities. Typically, these consist of written attribute statements that are used to evaluate products, processes, or systems. Audit guidelines are usually not prepared by the auditor but rather by the auditor's organization, client, or a regulatory authority. They are often used to ensure that specific items are evaluated during each audit when audit programs cover several locations, departments, or organizations. The primary differences between checklists and guidelines are that audit guideline items are usually written in statement form rather than as questions, and guidelines don't include provisions for recording audit results. To provide for the latter, log sheets are often used.

Keywords:
- Typically, these consist of written attribute statements that are used to evaluate products, processes, or systems.
- Audit guidelines are usually not prepared by the auditor but rather by the auditor's organization, client, or a regulatory authority.

Charlie Chong/ Fion Zhang


Part VA

• Log Sheets
Log sheets are simply blank columnar forms for recording information during an audit. Often, a log sheet is a simple ruled piece of paper on which the auditor records information reviewed (such as the procedures, records, processes, and so on) or evidence examined during the audit. Used in conjunction with audit guidelines, check sheets, or to augment (that is, add to) checklists, they help ensure that objective evidence collected during an audit is properly recorded.

Charlie Chong/ Fion Zhang


Part VA

• Scatter Diagrams
Scatter diagrams (correlation charts) identify the relationship between two variables. They can also be applied to identify the relationship of some variable with the potential root cause. Scatter diagrams plot relationships between two different variables:
- independent variables on the x axis, and
- dependent variables on the y axis.

This tool can also be used by the auditor for analysis of audit observation results. Typical patterns for scatter diagram analysis (as shown in Figure 18.20) include positive correlation, negative correlation, curvilinear correlation, and no correlation.

Charlie Chong/ Fion Zhang


Figure 18.20 Data correlation patterns for scatter analysis.

Part VA Charlie Chong/ Fion Zhang


Part VA

Data correlation patterns for scatter analysis.

Charlie Chong/ Fion Zhang




Part VA

Data correlation patterns for scatter analysis.

https://www.mymarketresearchmethods.com/types-of-charts-choose/

Charlie Chong/ Fion Zhang


Part VA

Data correlation patterns for scatter analysis. (online plot)

https://scatterplot.online/

Charlie Chong/ Fion Zhang


Part VA

Statistics Calculator: Linear Regression (online plot)

y = 2x - 4.7

http://www.alcula.com/calculators/statistics/linear-regression/
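A least-squares line such as y = 2x - 4.7 can be fitted in a few lines of Python. This is a minimal sketch with made-up sample points, not the data behind the linked calculator:

import numpy as np

# Hypothetical (x, y) observations
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([-2.5, -0.8, 1.2, 3.4, 5.2])

slope, intercept = np.polyfit(x, y, deg=1)   # fit y = slope*x + intercept
print(f"slope={slope:.2f}, intercept={intercept:.2f}")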

Charlie Chong/ Fion Zhang


Part VA

Correlation Test Online Calculator

https://www.answerminer.com/calculators/correlation-test

Charlie Chong/ Fion Zhang


Part VA

Correlation Test Online Calculator Pearson correlation coefficient In statistics, the Pearson correlation coefficient (PCC), also referred to as Pearson's r, the Pearson product-moment correlation coefficient (PPMCC) or the bivariate correlation,[1] is a measure of the linear correlation between two variables X and Y. Owing to the Cauchy–Schwarz inequality it has a value between +1 and −1, where +1 is total positive linear correlation, 0 is no linear correlation, and −1 is total negative linear correlation. It is widely used in the sciences. It was developed by Karl Pearson from a related idea introduced by Francis Galton in the 1880s.

Examples of scatter diagrams with different values of correlation coefficient (ρ) https://en.wikipedia.org/wiki/Pearson_correlation_coefficient
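For readers who want to reproduce Pearson's r directly from its definition (covariance of X and Y divided by the product of their standard deviations), here is a minimal from-scratch sketch with invented data:

import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of the standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4, 5], [2.0, 4.1, 5.9, 8.2, 9.8]))  # close to +1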

Charlie Chong/ Fion Zhang


Part VA

Correlation Test Online Calculator Pearson correlation coefficient Several sets of (x, y) points, with the correlation coefficient of x and y for each set. Note that the correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle), nor many aspects of nonlinear relationships (bottom). N.B.: the figure in the center has a slope of 0, but in that case the correlation coefficient is undefined because the variance of Y is zero.

https://en.wikipedia.org/wiki/Pearson_correlation_coefficient

Charlie Chong/ Fion Zhang


Part VA

Spearman's rank correlation coefficient In statistics, Spearman's rank correlation coefficient or Spearman's rho, named after Charles Spearman and often denoted by the Greek ρ or as rs, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function. The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other. Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables. Spearman's coefficient is appropriate for both continuous and discrete ordinal variables.[1][2] Both Spearman's ρ and Kendall's τ can be formulated as special cases of a more general correlation coefficient. https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient

A Spearman correlation of 1 results when the two variables being compared are monotonically related, even if their relationship is not linear. This means that all data-points with greater x-values than that of a given data-point will have greater y-values as well. In contrast, this does not give a perfect Pearson correlation.

https://mathcracker.com/spearman-correlation-calculator.php#results

Charlie Chong/ Fion Zhang


Part VA

Kendall rank correlation coefficient Overview The Kendall (1955) rank correlation coefficient evaluates the degree of similarity between two sets of ranks given to the same set of objects. This coefficient depends upon the number of inversions of pairs of objects which would be needed to transform one rank order into the other. In order to do so, each rank order is represented by the set of all pairs of objects (e.g., [a,b] and [b,a] are the two pairs representing the objects a and b), and a value of 1 or 0 is assigned to this pair when its order corresponds or does not correspond to the way these two objects were ordered. This coding schema provides a set of binary values which are then used to compute a Pearson correlation coefficient.

https://www.answerminer.com/calculators/correlation-test
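Where SciPy is available, all three coefficients discussed in this section can be computed in one place; the sample data below are purely illustrative:

from scipy import stats

x = [86, 97, 99, 100, 101, 103, 106, 110, 112, 113]
y = [0, 20, 28, 27, 50, 29, 7, 17, 6, 12]

print("Pearson r :", stats.pearsonr(x, y)[0])          # linear correlation
print("Spearman rho:", stats.spearmanr(x, y).correlation)  # monotonic (rank) correlation
print("Kendall tau :", stats.kendalltau(x, y).correlation) # rank concordance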

Charlie Chong/ Fion Zhang


Part VA

 Histograms A histogram is a graphic summary of variation in a set of data. Histograms, such as the one shown in Figure 18.21, give a clearer and more complete picture of the data than would a table of numbers, since patterns may be difficult to discern in a table. Patterns of variation in data are called distributions. Often, identifiable patterns exist in the variation, and the correct interpretation of these patterns can help identify the cause of a problem. A histogram is one of the simplest tools for organizing and summarizing data. It is essentially a vertical bar chart of a frequency distribution that is used to show the number of times a given discrete piece of information occurs. The histogram's simplicity of construction and interpretation makes it an effective tool in the auditor's elementary analysis of collected data. Histograms should indicate sample size to communicate the degree of confidence in the conclusions. Once a histogram has been completed, it should be analyzed by identifying and classifying the pattern of variation, and developing a plausible and relevant explanation for the pattern. For a normal distribution, the following identifiable patterns, shown in Figure 18.22, are commonly observed in histograms.
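A histogram is nothing more than frequency counts over equal-width bins. The sketch below uses simulated measurements (NumPy assumed available), prints a crude text plot of the counts, and reports the sample size as recommended above:

import numpy as np

rng = np.random.default_rng(seed=1)
measurements = rng.normal(loc=10.0, scale=0.2, size=200)   # simulated process data

counts, bin_edges = np.histogram(measurements, bins=10)
for count, left, right in zip(counts, bin_edges[:-1], bin_edges[1:]):
    print(f"{left:6.2f} - {right:6.2f} | {'#' * int(count)}")
print(f"n = {measurements.size}")   # always report sample size with a histogram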

Charlie Chong/ Fion Zhang


Figure 18.21 Histogram with normal distribution.

Part VA Charlie Chong/ Fion Zhang


Figure 18.22 Common histogram patterns.

Part VA Charlie Chong/ Fion Zhang


Part VA

a. Bell-shaped: A symmetrical shape with a peak in the middle of the range of data. This is the normal and natural distribution of data. Deviations from the bell shape might indicate the presence of complicating factors or outside influences. While deviations from a bell shape should be investigated, such deviations are not necessarily bad.

b. Double-peaked (bimodal): A distinct valley in the middle of the range of the data with peaks on either side. Usually a combination of two bell-shaped distributions, this pattern indicates that two distinct processes are causing this distribution.

c. Plateau: A flat top with no distinct peak and slight tails on either side. This pattern is likely to be the result of many different bell-shaped distributions with centers spread evenly throughout the range of data.

d. Comb: High and low values alternating in a regular fashion. This pattern typically indicates measurement error, errors in the way data were grouped to construct the histogram, or a systematic bias in the way data were rounded off. A less likely alternative is that this is a type of plateau distribution.

Charlie Chong/ Fion Zhang


Part VA

e. Skewed: An asymmetrical shape in which the peak is off-center in the range of data and the distribution tails off sharply on one side and gently on the other. If the long tail extends rightward, toward increasing values, the distribution is positively skewed; a negatively skewed distribution exists when the long tail extends leftward, toward decreasing values. The skewed pattern typically occurs when a practical limit, or a specification limit, exists on one side and is relatively close to the nominal value. In this case, there are not as many values available on the one side as on the other.

f. Truncated: An asymmetrical shape in which the peak is at or near the edge of the range of the data, and the distribution ends very abruptly on one side and tails off gently on the other. Truncated distributions are often smooth bell-shaped distributions with a part of the distribution removed, or truncated, by some external force.

g. Isolated-peaked: A small, separate group of data in addition to the larger distribution. This pattern is similar to the double-peaked distribution; however, the short bell shape indicates something that doesn't happen very often.

h. Edge-peaked: A large peak is appended to an otherwise smooth distribution. It is similar to the comb distribution in that an error was probably made in the data. All readings past a certain point may have been grouped into one value.

Charlie Chong/ Fion Zhang


Figure 18.22 Common histogram patterns.

Part VA

Normal and natural distribution of data

Result of many different bell shaped distributions with centers spread evenly throughout the range of data.

This pattern indicates that two distinct processes are causing this distribution.

Typically indicates measurement error; a less likely alternative is a type of plateau distribution.

Pattern typically occurs when a practical limit, or a specification limit, exists on one side and is relatively close to the nominal value.

This pattern is similar to the double-peaked distribution; however, the short bell shape indicates something that doesn't happen very often.

Truncated distributions are often smooth bell- shaped distributions with a part of the distribution removed, or truncated, by some external force.

It is similar to the comb distribution in that an error was probably made in the data. All readings past a certain point may have been grouped into one value.

Charlie Chong/ Fion Zhang


Part VA

No rules exist to explain pattern variation in every situation. The three most important characteristics are:

 centering (central tendency),
 width (spread, variation, scatter, dispersion), and
 shape (pattern).

If no discernible pattern appears to exist, the distribution may not be normal, and the data may actually be distributed according to some other distribution, such as exponential, gamma, or uniform. Analysis of distributions of these types is beyond the scope of this text, and further information should be sought from specialized statistics texts. All the topics in the remainder of this chapter are normally addressed by the auditee, but the auditor should have the knowledge to evaluate the auditee's improvement programs. Effective implementation and maintenance of improvement programs is critical to the ongoing success of the organization.
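A quick numerical summary of centering, spread, and shape (mean, sample standard deviation, and skewness) can be computed as in this sketch with simulated data; NumPy and SciPy are assumed available:

import numpy as np
from scipy import stats

data = np.random.default_rng(7).normal(50, 5, size=500)   # simulated measurements

print("center (mean)   :", np.mean(data))
print("spread (std dev):", np.std(data, ddof=1))
print("shape (skewness):", stats.skew(data))   # near 0 for a symmetric, bell-shaped pattern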

Charlie Chong/ Fion Zhang


Part VA

Gamma Distributions In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are three different parametrizations in common use:

 With a shape parameter k and a scale parameter θ.
 With a shape parameter α = k and an inverse scale parameter β = 1/θ, called a rate parameter.
 With a shape parameter k and a mean parameter μ = kθ = α/β.

In each of these three forms, both parameters are positive real numbers.

Probability density function

https://en.wikipedia.org/wiki/Gamma_distribution

Cumulative distribution function
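For reference, the density and cumulative distribution of a gamma variable under the shape k and scale θ parametrization can be evaluated with SciPy; the parameter values below are arbitrary:

from scipy import stats

k, theta = 2.0, 2.0                      # shape and scale parameters
gamma = stats.gamma(a=k, scale=theta)    # SciPy uses 'a' for the shape parameter

print(gamma.pdf(3.0))    # probability density at x = 3
print(gamma.cdf(3.0))    # cumulative probability up to x = 3
print(gamma.mean())      # mean = k * theta = 4.0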

Charlie Chong/ Fion Zhang


Part VA

Root Cause Analysis (RCA). Although an effort to solve a problem may utilize many of the tools, involve the appropriate people, and result in changes to the process, if the order in which the problem-solving actions occur isn't logically organized and methodical, much of the effort is likely to be wasted. In order to ensure that efforts are properly guided, many organizations create or adopt one or more models—a series of steps to be followed—for all such projects. More Reading: https://www.slideshare.net/oeconsulting/root-cause-analysis-by-operational-excellence-consulting

Charlie Chong/ Fion Zhang


Part VA https://www.slideserve.com/issac/root-cause-analysis-presented-by-team-incredibles

Charlie Chong/ Fion Zhang


Part VA

Root Cause Analysis (RCA)

http://www.prosolve.co.nz/root-cause-analysis/

Charlie Chong/ Fion Zhang


Part VA

Root Cause Analysis (RCA)

https://www.slideshare.net/oeconsulting/root-cause-analysis-by-operational-excellence-consulting

Charlie Chong/ Fion Zhang


Part VA

Affinity Diagram The affinity diagram is a business tool used to organize ideas and data. It is one of the Seven Management and Planning Tools. People have been grouping data into groups based on natural relationships for thousands of years; however, the term affinity diagram was devised by Jiro Kawakita in the 1960s and is sometimes referred to as the KJ Method. The tool is commonly used within project management and allows large numbers of ideas stemming from brainstorming to be sorted into groups, based on their natural relationships, for review and analysis. It is also frequently used in contextual inquiry as a way to organize notes and insights from field interviews. It can also be used for organizing other freeform comments, such as open-ended survey responses, support call logs, or other qualitative data.

Process The affinity diagram organizes ideas with the following steps:
 Record each idea on cards or notes.
 Look for ideas that seem to be related.
 Sort cards into groups until all cards have been used.

Once the cards have been sorted into groups the team may sort large clusters into subgroups for easier management and analysis. Once completed, the affinity diagram may be used to create a cause and effect diagram. In many cases, the best results tend to be achieved when the activity is completed by a cross-functional team, including key stakeholders. The process requires becoming deeply immersed in the data, which has benefits beyond the tangible deliverables.

https://en.wikipedia.org/wiki/Affinity_diagram

Charlie Chong/ Fion Zhang


Part VA http://gantt-chart-excel.com/tag/the-starbucks-menu-example

Charlie Chong/ Fion Zhang


Part VA Charlie Chong/ Fion Zhang


Part VA

Template

https://slidemodel.com/templates/split-arrows-diagram-template-powerpoint/

Charlie Chong/ Fion Zhang


Part VA

ď Ž Seven-step Problem- Solving Model Problem solving is about identifying root causes that have caused the problem to occur and taking actions to alleviate those causes. Following is a typical problem- solving process model and some possible activities and rationale for each step: 1. Identify the problem This step involves making sure that everyone is focused on the same issue. It may involve analysis of data to determine which problem should be worked on and writing a problem statement that clearly defines the exact problem to be addressed and where and when it occurred. A flowchart might be used to ensure that everyone understands the process in which the problem occurs. 2. List possible root causes Before jumping to conclusions about what to do about the problem, it is useful to look at the wide range of possibilities. Brainstorming and cause- and-effect analysis are often used. 3. Search out the most likely root cause This stage of the process requires looking for patterns in failure of the process. Check sheets might be used to record each failure and supporting information, or control charts may be used to monitor the process in order to detect trends or special causes. 4. Identify potential solutions Once it is fairly certain that the particular root cause has been found, a list of possible actions to remedy it should be developed. This is a creative part of the problem- solving process and may rely on brainstorming as well as input from specialists who may have a more complete understanding of the technology involved.

Charlie Chong/ Fion Zhang


Part VA

5. Select and implement a solution. After identifying several possible solutions, each should be evaluated as to its potential for success, cost and timing to implement, and other important criteria. Simple processes such as ranking or multivoting, or more scientific analysis using a matrix, are likely to be used in the selection process.

6. Follow up to evaluate the effect. All too often problem-solving efforts stop after remedial action has been taken. As with any good corrective action process, however, it is necessary that the process be monitored after the solution has been implemented. Control charts or Pareto diagrams are tools used to determine whether the problem has been solved. Possible findings might be that there was no effect (which may mean the solution wasn't properly implemented, the solution isn't appropriate for the root cause, or the real root cause wasn't found), a partial effect, or full resolution of the problem. If there was no effect, then the actions taken during the previous steps of the problem-solving model need to be reviewed in order to see where an error may have occurred.

7. Standardize the process. Even if the problem has been resolved, there is one final step that needs to occur. The solution needs to be built into the process (for example, poka-yoke, training for new employees, updating procedures) so that it will continue to work once focused attention on the problem is gone. A review to see what was learned from the project is also sometimes useful.

Charlie Chong/ Fion Zhang


Part VA Possible findings might be that there was no effect (which may mean the solution wasn't properly implemented, the solution isn't appropriate for the root cause, or the real root cause wasn't found), a partial effect, or full resolution of the problem. If there was no effect, then the actions taken during the previous steps of the problem-solving model need to be reviewed in order to see where an error may have occurred.

https://www.slideshare.net/agnihotry/rca-for-beginners

Charlie Chong/ Fion Zhang


Part VA

 Five Whys Throughout most problem solving there is usually a significant amount of effort expended in trying to understand why things happen the way they do. Root cause analysis requires understanding how a system or process works, and the many complex contributors, both technical and human. One method for getting to root causes is to repeatedly ask "why?" For example, if a car doesn't start when the key is turned, ask "why?" Is it because the engine doesn't turn over or because when it does turn over, it doesn't begin running on its own? If it doesn't turn over, ask "why?" Is it because the battery is too weak or because the starter is seized up? If it's because the battery is too weak, ask "why?" Is it because the temperature outside is extremely cold, that the battery cables are loose, or because an internal light was left on the previous evening and drained the battery? Although this is a simple example, it demonstrates the process of asking why until the actual root cause is found. It is called the five whys since it will often require asking why five times or more before the actual root cause is identified. The use of data or trials can help determine answers at each level. Figure 18.23 is adapted from a healthcare facility application.

Charlie Chong/ Fion Zhang


Figure 18.23 Five whys.

Part VA Charlie Chong/ Fion Zhang


Part VA

Plan-do-check-act (PDCA/PDSA) Cycle The seven-step problem-solving model presented earlier is actually nothing more than a more detailed version of a general process improvement model originally developed by Walter Shewhart. The plan–do–check–act (PDCA) cycle was adapted by W. Edwards Deming as the plan–do–study–act (PDSA) cycle, emphasizing the role of learning in improvement. In both cases, action is initiated by developing a plan for improvement, followed by putting the plan into action. In the next stage, the results of the action are examined critically. Did the action produce the desired results? Were any new problems created? Was the action worthwhile in terms of cost and other impacts? The knowledge gained in the third step is acted on. Possible actions include changing the plan, adopting the procedure, abandoning the idea, modifying the process, amplifying or reducing the scope, and then beginning the cycle all over again. Shown in Figure 18.24, the PDCA/PDSA cycle captures the core philosophy of continual improvement.

Charlie Chong/ Fion Zhang


Figure 18.24 PDCA/PDSA cycle.

Part VA Charlie Chong/ Fion Zhang


Part VA

SIPOC Analysis Problem-solving efforts are often focused on remedying a situation that has developed in which a process is not operating at its normal level. Much of continual improvement, however, involves improving a process that may be performing as expected, but where a higher level of performance is desired. A fundamental step in improving a process is to understand how it functions from a process management perspective. This can be seen through an analysis of the process to identify the supplier–input–process–output–customer (SIPOC) linkages (see Figure 18.25). SIPOC analysis begins with defining the process of interest and listing on the right side the outputs that the process creates for customers, who are also listed. Suppliers and the inputs they provide to enable the process are similarly shown on the left side. Once this fundamental process diagram is developed, two additional items can be discussed—measures that can be used to evaluate performance of the inputs and outputs, and the information and methods necessary to control the process.
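A SIPOC can be captured as data rather than as a drawing. The structure below is a hypothetical sketch for an order-fulfillment process (all entries are invented), which makes it easy to review each column during an audit:

# Hypothetical SIPOC record for an order-fulfillment process
sipoc = {
    "Suppliers": ["Component vendor", "IT department"],
    "Inputs":    ["Purchased parts", "Customer order data"],
    "Process":   ["Receive order", "Pick parts", "Assemble", "Ship"],
    "Outputs":   ["Packed product", "Shipping notice"],
    "Customers": ["Distributor", "End user"],
}

for column, items in sipoc.items():
    print(f"{column:<10}: {', '.join(items)}")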

Charlie Chong/ Fion Zhang


Figure 18.25 SIPOC diagram.

Part VA Charlie Chong/ Fion Zhang


Part VA

SIPOC In process improvement, a SIPOC (sometimes COPIS) is a tool that summarizes the inputs and outputs of one or more processes in table form. The acronym SIPOC stands for suppliers, inputs, process, outputs, and customers which form the columns of the table. It was in use at least as early as the total quality management programs of the late 1980s[a] and continues to be used today in Six Sigma, lean manufacturing, and business process management.

To emphasize putting the needs of the customer foremost, the tool is sometimes called COPIS and the process information is filled in starting with the customer and working upstream to the supplier. The SIPOC is often presented at the outset of process improvement efforts such as Kaizen events or during the "define" phase of the DMAIC process. It has three typical uses depending on the audience:
 To give people who are unfamiliar with a process a high-level overview
 To reacquaint people whose familiarity with a process has faded or become out-of-date due to process changes
 To help people in defining a new process

Several aspects of the SIPOC that may not be readily apparent are:
 Suppliers and customers may be internal or external to the organization that performs the process.
 Inputs and outputs may be materials, services, or information.
 The focus is on capturing the set of inputs and outputs rather than the individual steps in the process.

Meanings: COPIS: customer, output, process, input, supplier.

KAIZEN: Kaizen (改善) is the Japanese word for "improvement". In business, kaizen refers to activities that continuously improve all functions and involve all employees from the CEO to the assembly line workers. It also applies to processes, such as purchasing and logistics, that cross organizational boundaries into the supply chain.[1] It has been applied in healthcare,[2] psychotherapy,[3] life-coaching, government, and banking. By improving standardized programmes and processes, kaizen aims to eliminate waste (see lean manufacturing). Kaizen was first practiced in Japanese businesses after World War II, influenced in part by American business and quality-management teachers, and most notably as part of The Toyota Way. It has since spread throughout the world and has been applied to environments outside business and productivity. https://en.wikipedia.org/wiki/Kaizen

Charlie Chong/ Fion Zhang


Figure 18.25 SIPOC diagram.

Part VA http://jimmypnufc.blogspot.com/2016/03/the-introduction-of-5s-into-cell-culture.html

Charlie Chong/ Fion Zhang


Part VA

KAIZEN 改善. As part of the Marshall Plan after World War II, American occupation forces brought in experts to help with the rebuilding of Japanese industry while the Civil Communications Section (CCS) developed a management training program that taught statistical control methods as part of the overall material. Homer Sarasohn and Charles Protzman developed and taught this course in 1949-1950. Sarasohn recommended W. Edwards Deming for further training in statistical methods. The Economic and Scientific Section (ESS) group was also tasked with improving Japanese management skills and Edgar McVoy was instrumental in bringing Lowell Mellen to Japan to properly install the Training Within Industry (TWI) programs in 1951. The ESS group had a training film to introduce TWI's three "J" programs: Job Instruction, Job Methods and Job Relations. Titled "Improvement in Four Steps" (Kaizen eno Yon Dankai) it thus introduced kaizen to Japan. For the pioneering, introduction, and implementation of kaizen in Japan, the Emperor of Japan awarded the Order of the Sacred Treasure to Dr. Deming in 1960.

Implementation The Toyota Production System is known for kaizen, where all line personnel are expected to stop their moving production line in case of any abnormality and, along with their supervisor, suggest an improvement to resolve the abnormality, which may initiate a kaizen. The PDCA cycle: the cycle of kaizen activity can be defined as "Plan → Do → Check → Act". This is also known as the Shewhart cycle, Deming cycle, or PDCA.

https://en.wikipedia.org/wiki/Kaizen

Charlie Chong/ Fion Zhang


Part VA

1910 Ginza http://www.oldtokyo.com/ginza-crossing/

Charlie Chong/ Fion Zhang


Part VA

1950 Tokyo

https://2.bp.blogspot.com/-5Pg1vrux_XQ/Vz9p-EYKbXI/AAAAAAACNDA/0Z8B1H5X1lgN78zu1E2dOsyWIGA07YUJACLcB/s1600/Tokyo-1950s-9.jpg

Charlie Chong/ Fion Zhang


Part VA

1948 Tokyo Wako building at Ginza Crossing

https://2.bp.blogspot.com/-5Pg1vrux_XQ/Vz9p-EYKbXI/AAAAAAACNDA/0Z8B1H5X1lgN78zu1E2dOsyWIGA07YUJACLcB/s1600/Tokyo-1950s-9.jpg

Charlie Chong/ Fion Zhang


Part VB

Chapter 19 Process Improvement Techniques/Part VB _________________________ VB1. Six Sigma (σ) and the DMAIC Model Statistically speaking, sigma (σ) is a term indicating to what extent a process varies from perfection. The number of defects actually occurring, divided by the quantity of units processed and multiplied by one million, gives defects per million. Adding a 1.5 sigma shift in the mean results in the following defects per million:

1 sigma = 690,000 defects per million
2 sigma = 308,000 defects per million
3 sigma = 66,800 defects per million
4 sigma = 6,210 defects per million
5 sigma = 230 defects per million
6 sigma = 3.4 defects per million

While much of the literature refers to defects relative to manufactured products, Six Sigma may be used to measure material, forms, a time frame, distance, computer program coding, and so on. For example: if the cost of poor quality, at a four sigma level, represented 15 percent to 20 percent of sales revenue, an organization should be concerned.
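The defects-per-million figures above follow from the normal distribution once the conventional 1.5 sigma shift is included. A sketch of the conversion, assuming SciPy is available (function names are invented for illustration):

from scipy.stats import norm

def dpmo(defects, units, opportunities_per_unit=1):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Sigma level including the conventional 1.5 sigma shift in the mean."""
    return norm.ppf(1 - dpmo_value / 1_000_000) + shift

print(round(dpmo(34, 10_000_000), 1))    # 3.4 DPMO
print(round(sigma_level(3.4), 1))        # about 6.0
print(round(sigma_level(66_800), 1))     # about 3.0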

Charlie Chong/ Fion Zhang


Part VB1

Six Sigma, as a philosophy, translates to the organizational belief that it is possible to produce totally defect-free products or services—albeit more a dream than a reality for most organizations. With most organizations operating at about three sigma or below, getting to perfection leaves much work to be done. Motorola initiated the Six Sigma methodology in the 1980s. General Electric's CEO directed their Six Sigma initiative in 1995. Six Sigma constitutes an evolving set of principles, fundamental practices, and tools—a breakthrough strategy. The evolving Six Sigma principles are:

1. Committed and strong leadership is absolutely essential—it's a major cultural change.
2. Six Sigma initiatives and other existing initiatives, strategies, measures, and practices must be integrated—Six Sigma must be an integral part of how the organization conducts its business.

Charlie Chong/ Fion Zhang


Part VB1

3. Quantitative analysis and statistical thinking are key concepts—it's data-based managing.
4. Constant effort must be applied to learning everything possible about customers and the marketplace—intelligence gathering and analysis is critical.
5. The Six Sigma approach must produce a significant payoff in a reasonable time period—real validated dollar savings is required.
6. A hierarchy of highly trained individuals with verified successes to their credit, often referred to as Master Black Belts, Black Belts, and Green Belts, are needed to extend the leadership to all organizational levels.
7. Performance tracking, measuring, and reporting systems are needed to monitor progress, allow for course corrections as needed, and link the Six Sigma approach to the organizational goals, objectives, and plans. Very often, existing performance tracking, measuring, and reporting systems fail to address the level where they are meaningful to the people involved.
8. The organization's reward and recognition systems must support continuous reinforcement of the people, at every level, who make the Six Sigma approach viable and successful. Compensation systems especially need to be reengineered.
9. The successful organization should internally celebrate successes frequently—success breeds success.
10. To further enhance its image, and the self-esteem of its people, the successful organization should widely publicize its Six Sigma accomplishments and, to the extent feasible, share its principles and practices with other organizations—be a member of a world-class group of organizations who have committed their efforts to achieving perfection.

Charlie Chong/ Fion Zhang


Part VB1

Z Score Calculator

https://www.zscorecalculator.com/

Charlie Chong/ Fion Zhang


Part VB1

The following list contains fundamental Six Sigma practices and some of the applicable tools, commonly known by the mnemonic DMAIC, which stands for:

Define the customer and organizational requirements. Management prepares a team charter that includes the problem statement, scope, goals and objectives, milestones, roles and responsibilities, resources, and project timelines. In this phase, the customer, core business processes, and issues critical to quality (CTQ) are identified:
 Data collection tools: check sheets, brainstorming, flowcharts;
 Data analysis tools: cause-and-effect diagrams, affinity diagrams, tree diagrams, root cause analysis;
 Customer data collection and analysis: QFD (quality function deployment), surveys.

Measure what is critical to quality, map the process, establish the measurement system, and determine what is unacceptable (defects). The team gathers data from the targeted process to establish baseline performance, then benchmarks similar processes or operations in order to define a strategy for achieving objectives:
 Process control tools: control charts;
 Process improvement tools: process mapping, Pareto charts, process benchmarking, TOC (theory of constraints), risk assessment, FMEA, design of experiments, cost of quality, lean thinking techniques.

Analyze to develop a baseline (process capability). The data collected and the process map are used to determine root causes of defects and opportunities for improvement. A number of statistical tools are used in this phase to ensure that the underlying issues affecting performance are understood and that the capability can be improved. The information collected is utilized to determine root causes and to identify opportunities for improvement:
 Identify root causes of defects;
 Pinpoint opportunities and set objectives.

Meanings: Mnemonic: assisting or intended to assist the memory.

Charlie Chong/ Fion Zhang


Part VB1

Improve the process. This phase identifies solutions to problems through the application of advanced statistical tools and design of experiments. The solutions include performance measurements to ensure that the improvements are long term:
• Project planning and management
• Training

Control the system through an established process. Having improved and stabilized the process, the resulting capability is determined. The goal in this phase is to ensure that the process controls are in place to maintain the improvements attained. This includes updating process and system documentation as well as establishing ongoing performance measures so that performance gains are not lost.

The DMAIC phases are normally applied to a project. Individuals select or are assigned process improvement teams (PITs) and apply the Six Sigma approach. Six Sigma programs/projects may contribute to corrective action, preventive action, or innovative actions. Auditors can verify benefits claimed, ensure that the program is sustained, and provide input for additional improvements using continual improvement assessment techniques.

Acronyms: critical to quality (CTQ), QFD (quality function deployment), TOC (theory of constraints)

Charlie Chong/ Fion Zhang


Part VB1

Tree Diagram A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements. Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal, but are also a popular tool in machine learning.

https://en.wikipedia.org/wiki/Decision_tree

Charlie Chong/ Fion Zhang


Part VB1

Affinity Diagram An affinity diagram shows the relationships between information, opinions, problems, solutions, and issues by placing them in related groups. It allows a broad range of ideas to be organized so they can be more effectively analyzed. It's also known as a KJ diagram. The History of Affinity Diagrams Affinity diagrams were invented by Jiro Kawakita in the 1960s, who called this diagram the K-J Method. They help prioritize actions and improve group decision-making when resources are limited. By the 1970s, affinity diagrams were part of what's known as the Seven Management and Planning Tools, an approach to process improvement used in Total Quality Control in Japan. Other tools include: interrelationship diagram, tree diagram, prioritization matrix, matrix diagram, process decision program chart, and activity network diagram.

https://en.wikipedia.org/wiki/Decision_tree

Charlie Chong/ Fion Zhang


Part VB1

Affinity Diagram When to use Affinity Diagrams An Affinity Diagram is useful when you want to:
 Make sense out of large volumes of chaotic data
 Encourage new patterns of thinking. An affinity diagram can break through traditional or entrenched thinking

https://uxdict.io/design-thinking-methods-affinity-diagrams-357bd8671ad4

Charlie Chong/ Fion Zhang




Part VB1

Affinity Diagram When to use Affinity Diagrams An Affinity Diagram is useful when you want to:
 Make sense out of large volumes of chaotic data
 Encourage new patterns of thinking. An affinity diagram can break through traditional or entrenched thinking

https://www.nngroup.com/articles/affinity-diagram/

Charlie Chong/ Fion Zhang


Part VB2

VB2. Lean Lean is a strategy for achieving the shortest possible cycle time. Based on the Toyota Production System, lean manufacturing aims to increase value-added work by eliminating waste and unnecessary process steps, reducing inventory, reducing product development time, and increasing customer responsiveness while providing high-quality products as economically and efficiently as possible. The techniques employed are focused on reducing the time from the receipt of a customer's order to its shipment. The goal is to improve customer satisfaction, throughput time, employee morale, and profitability.

Cycle-Time Reduction Cycle time is the total amount of time required to complete a process, from the first step to the last. Today's methods for cycle-time reduction came about through Henry Ford's early focus on minimizing waste, traditional industrial engineering techniques (for example, time and motion studies), and the Japanese adaptation of these methods (often called the Toyota Production System [TPS]) to smaller production run applications. Although cycle-time reduction is best known for application to production operations, it is equally useful in nonmanufacturing environments, where person-to-person handoffs, queues of jobs, and facility layout affect productivity of knowledge workers. To be able to select where best to implement cycle-time reduction requires a high-level system analysis of the organization to determine where current performance deficits or bottlenecks are located. The organization's overall system has a critical path (the series of steps that must occur in sequence and take the longest time to complete). Clearly, improving a process that is not on the critical path will have no real impact on cycle time.

Charlie Chong/ Fion Zhang


Part VB2

Typical actions to shorten the cycle time of processes include:
1. Removing non-value-adding steps.
2. Speeding up value-adding steps.
3. Integrating several steps of the process into a single step; this often requires expanding the skill level of employees responsible for the new process and/or providing technical support such as a computer database.
4. Breaking the process into several smaller processes that focus on a narrower or special product. This work cell or small business unit concept allows employees to develop customer-product-focused skills and usually requires collocating (to set or place together, especially side by side) equipment and personnel responsible for the cell.
5. Shifting responsibility to suppliers or customers, or taking back some of the responsibility currently being performed by suppliers or customers. The practice of modular assembly is a typical example of this process.
6. Standardizing the product/service process as much as possible, then creating variations when orders are received; this allows the product to be partially processed, requiring only completion before shipment (for example, the practice of producing only white sweaters, then dyeing them just prior to shipment).

Charlie Chong/ Fion Zhang


Part VB2

Other ways of improving cycle times include improving equipment reliability (thereby reducing non-value-added maintenance downtime), reducing defects (that use up valuable resource time), and reducing unnecessary inventory. Another fundamental process for improving cycle time is that of simply better organizing the workplace. (See the "Five S" section.) Reducing cycle time can reduce work-in-process and finished goods inventories, allow smaller production lot sizes, decrease lead times for production, and increase throughput (decrease overall time from start to finish). Also, when process steps are eliminated or streamlined, overall quality tends to improve. Because many opportunities might be identified when beginning the effort to reduce cycle time, a Pareto analysis (discussed in Chapter 18) can be performed to decide which factors demand immediate attention. Some problems might be fixed in minutes, whereas others might require the establishment of a process improvement team and take months to complete.

Although lean production is based on basic industrial engineering concepts, it has been primarily visible to U.S. companies as the Toyota Production System. The basic premise is that only what is needed should be produced, and it should only be produced when it is actually needed. Due to the amount of time, energy, and other resources wasted by how processes and organizations are designed, however, organizations tend to produce what they think they might need (for example, based on forecasts) rather than what they actually need (for example, based on customer orders).

Charlie Chong/ Fion Zhang


Part VB2

Value Stream Mapping Value stream mapping (VSM) is charting the sequence of movements of information, materials, and production activities in the value stream (all activities involving the designing, ordering, producing, and delivering of products and services to the organization's customers). An advantage of this is that a "before action is taken" value stream map depicts the current state of the organization and enables identification of how value is created and where waste occurs. In addition, employees see the whole value stream rather than just the one part in which they are involved. This improves understanding and communications, and facilitates waste elimination. A VSM is used to identify areas for improvement. At the macro level, a VSM identifies waste along the value stream and helps strengthen supplier and customer partnerships and alliances. At the micro level, a VSM identifies waste (non-value-added activities) and identifies opportunities that can be addressed with a kaizen blitz. Figures 19.1 and 19.2 are sample value stream maps (macro level and micro level).

Charlie Chong/ Fion Zhang


Part VB2

Value-stream Mapping Value-stream mapping is a lean-management method for analyzing the current state and designing a future state for the series of events that take a product or service from its beginning through to the customer with reduced lean wastes as compared to current map. A value stream focuses on areas of a firm that add value to a product or service, whereas a value chain refers to all of the activities within a company. At Toyota, it is known as "material- and information-flow mapping".

Purpose Of Value Stream Mapping The purpose of value stream mapping is to identify and remove or reduce "waste" in value streams, thereby increasing the efficiency of a given value stream. Waste removal is intended to increase productivity by creating leaner operations which in turn make waste and quality problems easier to identify.

Types Of Waste Daniel T. Jones (1995) identifies seven commonly accepted types of waste. These terms are updated from the Toyota production system (TPS)'s original nomenclature:

1. Faster-than-necessary pace: creating too much of a good or service that damages production flow, quality, and productivity. Previously referred to as overproduction, and leads to storage and lead time waste.
2. Waiting: any time goods are not being transported or worked on.
3. Conveyance: the process by which goods are moved around. Previously referred to as transport, and includes double-handling and excessive movement.
4. Processing: an overly complex solution for a simple procedure. Previously referred to as inappropriate processing, and includes unsafe production. This typically leads to poor layout and communication, and unnecessary motion.
5. Excess stock: an overabundance of inventory which results in greater lead times, increased difficulty identifying problems, and significant storage costs. Previously referred to as unnecessary inventory.
6. Unnecessary motion: ergonomic waste that requires employees to use excess energy such as picking up objects, bending, or stretching. Previously referred to as unnecessary movements, and usually avoidable.
7. Correction of mistakes: any cost associated with defects or the resources required to correct them.

https://en.wikipedia.org/wiki/Value_stream_mapping

Charlie Chong/ Fion Zhang


Part VB2

Waste Removal Operations Monden (1994) identifies three types of operations:
 Non-value adding operations (NVA): actions that should be eliminated, such as waiting.
 Necessary but non-value adding (NNVA): actions that are wasteful but necessary under current operating procedures.
 Value-adding (VA): conversion or processing of raw materials via manual labor.

Applications Value-stream mapping has supporting methods that are often used in Lean environments to analyze and design flows at the system level (across multiple processes). Although value-stream mapping is often associated with manufacturing, it is also used in logistics, supply chain, service related industries, healthcare, software development, product development, and administrative and office processes.

In a build-to-the-standard form, Shigeo Shingo suggests that the value-adding steps be drawn across the centre of the map and the non–value-adding steps be represented in vertical lines at right angles to the value stream. Thus, the activities become easily separated into the value stream, which is the focus of one type of attention, and the 'waste' steps, another type. He calls the value stream the process and the non-value streams the operations. The thinking here is that the non–value-adding steps are often preparatory or tidying up to the value-adding step and are closely associated with the person or machine/workstation that executes that value-adding step. Therefore, each vertical line is the 'story' of a person or workstation whilst the horizontal line represents the 'story' of the product being created. Value stream mapping is a recognised method used as part of Six Sigma methodologies.

https://en.wikipedia.org/wiki/Value_stream_mapping

Charlie Chong/ Fion Zhang


Part VB2

Value-stream mapping usually employs standard symbols to represent items and processes, therefore knowledge of these symbols is essential to correctly interpret the production system problems.

http://courses.washington.edu/ie337/Value_Stream_Mapping.pdf

https://en.wikipedia.org/wiki/Value_stream_mapping

Charlie Chong/ Fion Zhang


Part VB2

Read More: Value-stream mapping Pioneered by Toyota in the 1940s, Lean thinking revolutionized the manufacturing industry, improving collaboration, communication, and flow on production lines. Value stream mapping is the Lean tool Toyota used to define and optimize the various steps involved in getting a product, service, or value-adding project from start to finish.

http://courses.washington.edu/ie337/Value_Stream_Mapping.pdf

Charlie Chong/ Fion Zhang


Part VB2

Figure 19.1 Value stream map—macro level (partial).

Charlie Chong/ Fion Zhang


Part VB2

Figure 19.2 Value stream map—plant level (partial).

Charlie Chong/ Fion Zhang


Part VB2

Five S The Japanese use the term Five S for five practices for maintaining a clean and efficient workplace:
1. Seiri: Separate needed tools, parts, and instructions from unneeded materials; remove the latter.
2. Seiton: Neatly arrange and identify parts and tools for ease of use.
3. Seiso: Conduct a cleanup campaign.
4. Seiketsu: As a habit, beginning with self, then the workplace, be clean and tidy.
5. Shitsuke: Apply discipline in following procedures.

Note: The typical English words for Five S are Sort, Set, Shine, Standardize, Sustain.

Far more than the good things they do, the Five Ss can:
 Build awareness of the concept and principles of improvement
 Set the stage to begin serious waste reduction initiatives
 Break down barriers to improvement, at low cost
 Empower the workers to control their work environment


Charlie Chong/ Fion Zhang


Part VB2

Five S

Charlie Chong/ Fion Zhang


Part VB2 Charlie Chong/ Fion Zhang


Part VB2

Visual Management This method is used to arrange the workplace, all tools, parts, and material, and the production process itself, so that the status of the process can be understood at a glance by everyone. Further, the intent is to furnish visual clues to aid the performer in correctly processing a step or series of steps, reduce cycle time, cut costs, smooth work flow, and improve quality. By seeing the status of the process, both the performer and management have an up-to-the-second picture of what has happened, what's presently happening, and what's to be done.

Advantages of visual management are:
 Catches errors and defects before they can occur
 Quick detection enables rapid correction
 Identifies and removes safety hazards
 Improves communications
 Improves workplace efficiency
 Cuts costs

Examples of visual management are:
 Color-coded sectors on meter faces to indicate reading acceptance range, low and high unacceptable readings
 Electronic counters mounted over the work area to indicate rate of accepted finished product
 Work orders printed on colored paper where the color denotes the grade and type of metal to be used
 Lights atop enclosed equipment indicating status of product being processed
 Slots at dispatch station for pending work orders, indicating work to be scheduled (and backlog) and when the work unit(s) will become idle
 Painted floor and/or wall space with shadow images of the tool, die, or pallet that usually occupies the space when not in use


Part VB2

Visual Management

Charlie Chong/ Fion Zhang




Part VB2

Warehouse Visual Management

Charlie Chong/ Fion Zhang


Part VB2

Waste Reduction Lean production focuses on reducing waste and goes against traditional mass production thinking by defining waste as anything that does not add value. Waste is frequently a result of how the system is designed. The Japanese word for waste is muda. Examples of seven types of waste are:

1. Overproduction
 Enlarging the number of requirements beyond customers' needs
 Including too much detail or too many options in designs
 Specifying materials that require sole-source procurement or that call for seeking economy-of-scale-oriented procurement
 Requiring batch processing, lengthy and costly setups, or low yield processes

2. Delays, waiting
 Holdups due to people, information, tools, and equipment not being ready
 Delays waiting for test results to know if a part is made correctly
 Unrealistic schedules resulting in backups in manufacturing flow
 Part improperly designed for manufacture, design changes

Charlie Chong/ Fion Zhang


Part VB2

3. Transportation
 Non-value-added transport of work in process
 Inefficient layout of plant causing multiple transports
 Specifying materials from suppliers geographically located a great distance from the manufacturing facility, resulting in higher shipping costs

4. Processing
 Non-value-added effort expended
 Designers failed to consider production process capabilities (constraints, plant capacities, tolerances that can be attained, process yield rate, setup time and complexity, worker skills and knowledge, storage constraints, material handling constraints)
 Designs too complex
 Pulse (takt time) of production flow too high or too low in relation to customer demand

5. Excess inventory
 Stockpiling more materials than are needed to fulfill customer orders
 Material handling to store, retrieve, store . . . as the part proceeds in the process
 Unreliable production equipment, so safety stock is desired

Charlie Chong/ Fion Zhang


Part VB2

6. Wasted motion
 Non-value-added movements, such as reaching, walking, bending, searching, sorting
 Product designs that are not manufacturing-friendly
 Requirements for lifting cumbersome and/or heavy parts
 Manufacturing steps that require many positioning-type moves

7. Defective parts
 Corrective actions, looking for root cause
 Scrap
 Downgrading defectives (reducing price, seeking a buyer) in order to recover some of the cost of manufacture
 Faulty design causing defects
 Excessive tolerances creating more defectives

Charlie Chong/ Fion Zhang


Part VB2

Inventories (buffer stock and batch and queue processing) can be a huge waste. Consider this example: The objective is to complete 200 pieces.
 The process consists of three operations of 10 seconds each. If the 200 pieces are processed as a batch, the total cycle time is 6000 seconds (200 × 30 seconds).
 In a single-piece flow mode, there is no accumulation of material between steps. Cycle time is 30 seconds for a single piece (approximately 2000 seconds total).

The reduction in total cycle time is 67 percent. Work in process has also been reduced from 600 pieces to 3 pieces. Analyses of processes for waste usually involve diagrams that show the flow of materials and people and document how much time is spent on value-added versus non-value-added activity.
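The batch versus single-piece-flow arithmetic above can be checked with a few lines of code (three 10-second operations, 200 pieces, and the same simplifying assumptions as in the text):

pieces = 200
operations = [10, 10, 10]            # seconds per piece at each operation

# Batch-and-queue: the whole batch finishes one operation before the next starts
batch_total = pieces * sum(operations)                                  # 6000 s

# Single-piece flow: once the line fills, one finished piece exits every 10 s
single_piece_total = sum(operations) + (pieces - 1) * operations[-1]    # about 2020 s

print(batch_total, single_piece_total)
print(f"cycle-time reduction = {100 * (1 - single_piece_total / batch_total):.0f}%")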


Charlie Chong/ Fion Zhang


Part VB2

Cycle time is affected by both visible and invisible waste. Examples of visible waste are:
 Out-of-spec incoming material. For example, an invoice from a supplier has incorrect pricing or aluminum sheets are the wrong size.
 Scrap. For example, holes are drilled in the wrong place or shoe soles are improperly attached.
 Downtime. For example, a school bus is not operable or process 4 cannot begin because of a backlog at process 3.
 Product rework. For example, a failed electrical continuity test or the customer number is not coded on the invoice.

Examples of invisible waste are:
 Inefficient setups. For example, a jig requires frequent retightening or incoming orders are not sorted correctly for data entry.
 Queue times of work in process. For example, an assembly line is not balanced to eliminate bottlenecks (constraints) or an inefficient loading zone protocol slows school bus unloading, causing late classes.

Charlie Chong/ Fion Zhang


Part VB2

 Unnecessary motion. For example, materials for assembly are located out of easy reach or workers need to bring each completed order to the dispatch desk.
 Wait time of people and machines. For example, a utility crew (three workers and truck) waiting until a parked auto can be removed from the work area, or planes are late in arriving due to inadequate scheduling of available terminal gates.
 Inventory. For example, obsolete material returned from a distributor's annual clean-out is placed in inventory anticipating the possibility of a future sale or, to take advantage of quantity discounts, a year's supply of paper bags is ordered and stored.
 Movement of material, work in progress (WIP), and finished goods. For example, in a function-oriented plant layout, WIP has to be moved from 15 to 950 feet to the next operation, or stacks of files are constantly being moved about to gain access to filing cabinets and machines.
 Overproduction. For example, because customers usually order the same item again, an overrun is produced to place in inventory just in case, or extras are made at earlier operations in case they are needed in subsequent operations.
 Engineering changes. For example, problems in production necessitate engineering changes, or failure to clearly review customer requirements causes changes.
 Unneeded reports. For example, a report initiated five years ago is still produced each week even though the need was eliminated four years ago, or a hard copy report duplicates the same information available on a computer screen.
 Meetings that add no value. For example, a morning production meeting is held each day whether or not there is a need (coffee and danish are served), or 15 people attend a staff meeting each week where one of the two hours is used to solve a problem usually involving less than one-fifth of the attendees.
 Management processes that take too long or have no value. For example, all requisitions (even for paper clips) must be signed by a manager, or a memo to file must be prepared for every decision made between one department and another.

Charlie Chong/ Fion Zhang


Part VB2

Mistake-Proofing
Mistake-proofing originated in Japan as an approach applied to factory processes. It was perfected by Shigeo Shingo as poka-yoke. It is also applicable to virtually any process in any context. For example, the use of a spellchecker in composing text on a computer is an attempt to prevent the writer from making spelling errors (although we have all realized it isn't foolproof). This analytical approach involves probing a process to determine where human errors could occur. Then each potential error is traced back to its source. From these data, consider ways to prevent the potential error. Eliminating the step is the preferred alternative. If a way to prevent the error cannot be identified, then look for ways to lessen the potential for error. Finally, choose the best approach possible, test it, make any needed modifications, and fully implement the approach.

Mistakes may be classified into four categories:
• Information errors
  - Information is ambiguous
  - Information is incorrect
  - Information is misread, misinterpreted, or mis-measured
  - Information is omitted
  - There's inadequate warning

Charlie Chong/ Fion Zhang


Part VB2

• Misalignment
  - Parts are misaligned
  - A part is misadjusted
  - A machine or process is mistimed or rushed
• Omission or commission
  - Material or a part is added
  - A prohibited and/or harmful action is performed
  - An operation is omitted
  - Parts are omitted, so there's a counting error
• Selection errors
  - A wrong part is used
  - There is a wrong destination or location
  - There's a wrong operation
  - There's a wrong orientation

Mistake-proofing actions are intended to:
• Eliminate the opportunity for error
• Detect the potential for error
• Prevent an error

Charlie Chong/ Fion Zhang


Part VB2

Let's look at some examples.

1. In the first situation, a patient is required to fill out forms at various stages of diagnosis and treatment (the ubiquitous clipboard treatment). The patient is prone to making errors due to the frustration and added anxiety of filling out subsequent forms. After analyzing the situation, the solution is to enter initial patient data into a computer at the first point of the patient's arrival. Add to the computer record as the patient passes through the different stages with different doctors and services. When referrals are made to doctors outside the initial facility, send an electronic copy of the patient's record (e-mail) to the referred doctor. Except to correct a previous entry, the intent is to never require the patient to furnish the same data more than once. Considering the four types of mistakes, we can see that information was omitted or incorrectly entered at subsequent steps. The solution eliminates resubmitting redundant data.

2. In the second example, a low-cost but critical part is stored in an open bin for access by any operator in the work unit. While there is a minimum on-hand quantity posted and a reorder card is kept in the bin, the bin frequently is empty before anyone takes notice. The mistake is that there's inadequate warning in receiving vital information. The solution is to design and install a spring-loaded bin bottom that is calibrated to trigger an alarm buzzer and flashing light when the minimum stock level is reached. The alarm and light will correct the mistake.

3. In the last example, there is a potential to incur injury from the rotating blades when operators of small tractor-mowers dismount from a running tractor. The solution is to install a spring-actuated tractor seat that shuts off the tractor motor as soon as weight is removed. Using this tractor seat will prevent a harmful action.

Careful elimination, detection, and prevention actions can result in near 100 percent quality. Unintended use, ignorance, or willful misuse or neglect by humans may still circumvent safeguards, however. For example, unless operating a motor vehicle is prevented until all seatbelts are securely fastened, warning lights and strict law enforcement alone won't achieve 100 percent effectiveness. Continually improve processes and mistake-proofing efforts to strive for 100 percent.

Charlie Chong/ Fion Zhang


Part VB2 https://www.slideshare.net/timothywooi/pokayoke-a-lean-strategy-to-mistake-proofing

Charlie Chong/ Fion Zhang


Part VB2

No Human Error.

https://www.slideshare.net/timothywooi/pokayoke-a-lean-strategy-to-mistake-proofing

Charlie Chong/ Fion Zhang


Part VB2

Setup/Changeover Time Reduction
The long time required to change a die in a stamping operation meant that a longer production run would be required to absorb the downtime caused by the changeover. To address this, the Japanese created a method for reducing setup times called single minute exchange of die (SMED), also referred to as rapid exchange of tooling and dies (RETAD). Times required for a die change were dramatically reduced, often from several hours to minutes. To improve setup/changeover times, initiate a plan–do–check–act approach:

1. Map the processes to be addressed. (Videotaping the setup/changeover process is a useful means for identifying areas for improvement.)
2. Collect setup time data for each process.
3. Establish setup time reduction objectives.
4. Identify which process is the primary overall constraint (bottleneck). Prioritize remaining processes by magnitude of setup times, and target the next process by longest time.
5. Remove non-value-adding activities from the targeted process (for example, looking for tools required for the changeover). Trischler lists dozens of non-value-added activities, most of which are applicable in a variety of industries and processes. Note that there are some steps that fit the non-value-added category that cannot actually be removed, but can be speeded up.

Charlie Chong/ Fion Zhang


Part VB2

6. Identify setup steps that are internal (steps the machine operator must perform when the machine is idle) versus steps that are external (steps that can be performed while the machine is still running the previous part; for example, removing the fixture or materials for the next part from storage and locating them near the machine).
   - Internal setup step: a step the machine operator must perform while the machine is idle.
   - External setup step: a step that can be performed while the machine is still running the previous part.
7. Focus on moving internal steps to external steps where possible.
8. Identify setup activities that can be done simultaneously while the machine is down (the concept of an auto racing pit crew).
9. Speed up required activities.
10. Standardize changeover parts (for example, all dies have a standard height, all fasteners require the same size wrench to tighten).
11. Store setup parts (dies and jigs) on portable carts in a position and at the height where they can be readily wheeled into place at the machine and the switchover accomplished with minimum movement and effort.
12. Store all setup tools to be used in a designated place within easy reach (for example, visual shadow areas on a portable tool cart).
13. Error-proof the setup process.
14. Evaluate the setup/changeover process and make any modifications needed.
15. Return to step 4 and repeat the sequence until all setup times have been improved.
16. Fully implement setup/changeover procedures.
17. Evaluate the effectiveness of setup/changeover time reduction efforts against the objectives set in step 3.
18. Collect setup time data periodically and initiate the improvement effort again; return to step 1.

Charlie Chong/ Fion Zhang


Part VB2

Spiral Welded API Pipe mill

Charlie Chong/ Fion Zhang


Part VB2

Total Productive Maintenance
Maintenance typically may follow one of the following scenarios:
• Equipment is repaired when it routinely produces defectives. Maintenance is left to the maintenance crew's discretion. (Fix it when it breaks.)
• Equipment fails and maintenance is performed while the equipment is down for repairs. (Maintenance if and when the opportunity presents itself.)
• Equipment maintenance is on a predetermined schedule based on the equipment manufacturer's recommendations, or all maintenance is scheduled to be done during the annual plant shutdown (preventive maintenance).
• Operators are trained to recognize signs of deterioration (wear, loose fixtures and fasteners, missing bolts and nuts, accumulated shavings from the process, accumulated dirt and dust, over- or under-lubrication, excessive operating noise, excess vibration, spilled coolants, leaks, clogged drains, valves and switches not working correctly, tooling showing signs of excessive wear, increasing difficulty in maintaining tolerances) and act to eliminate or reduce the conditions (autonomous maintenance).
• Statistical analysis is used to determine the optimum time between failures. The equipment is scheduled for maintenance at a reasonable interval before failure is likely to occur (predictive maintenance).
• Maintenance is eliminated or substantially reduced by improving and redesigning the equipment to require low or no maintenance (maintenance prevention).

Charlie Chong/ Fion Zhang


Part VB2

Total productive maintenance (TPM) is an organization-wide effort aimed at reducing loss due to equipment failure, slowing speed, and defects. TPM is critical to a lean operation in that there is minimal to no buffer stock to counter the effect of equipment malfunctions and downtime. The purposes of TPM are to:
• Achieve the maximum effectiveness of equipment
• Involve all equipment operators in developing maintenance skills
• Improve the reliability of equipment
• Reduce the size and cost of a maintenance staff
• Avoid unplanned equipment downtime and its associated costs
• Achieve an economic balance between prevention costs and total costs while reducing failure costs

Charlie Chong/ Fion Zhang


Part VB2

Total productive maintenance (TPM) Involve all equipment operators in developing maintenance skills

Charlie Chong/ Fion Zhang




Part VB2

Management's role in TPM is to:
• Enthusiastically sponsor and visibly support the TPM initiative
• Provide for documented work instructions to guide operators' TPM activities
• Provide for the training operators need to perform maintenance and minor repairs of the equipment assigned to them
• Provide for the resources operators need (special tools, cleaning supplies)
• Provide for the operators' time to perform TPM activities
• Adjust the compensation system as necessary to reinforce operators' TPM performance
• Provide for establishing the metrics that are used to monitor and continually improve TPM, including developing the economic case for TPM
• Provide for appropriate and timely operator performance feedback and recognition for work done well relative to TPM activities

Charlie Chong/ Fion Zhang


Part VB2

Kaizen Blitz/Event

Kaizen is a Japanese word, 改善. It has come to mean continual and incremental improvement (as opposed to reengineering, which is a breakthrough, quantum-leap approach). A kaizen blitz or kaizen event is an intense process often lasting three to five consecutive days. It introduces rapid change into an organization by using the ideas and motivation of the people who do the work. It has also been called zero investment improvement. In a kaizen event, a cross-functional team focuses on a target process, studies it, collects and analyzes data, discusses improvement alternatives, and implements changes. The emphasis is on making the process better, not necessarily perfect. Sub-processes that impact cycle time are a prime target on which to put the synergy of a kaizen team to work. The typical stages of a kaizen event are:

Week before blitz
• Wednesday: Train three or four facilitators in kaizen blitz techniques and tools, as well as enhance their facilitation skill level.
• Thursday: Target the process to be addressed.
• Friday: Gather initial data on the present targeted process.

Blitz week
• Monday: Train the participants in kaizen blitz techniques and tools.
• Tuesday: Training (AM); process mapping of the present state (PM).
• Wednesday: Process mapping of the future state. Eliminating non-value-added steps and other waste. Eliminating bottlenecks. Designing the new process flow.
• Thursday: Test changes, modify as needed.
• Friday: Implement the new work flow, tweak the process, document the changes, and be ready for full-scale production on Monday. Prepare a follow-up plan.

Post blitz
• Conduct a follow-up evaluation of the change (at an appropriate interval).
• Plan the next blitz.

Charlie Chong/ Fion Zhang


Part VB2

Kaizen Blitz The Blitz was a German bombing offensive against Britain in 1940 and 1941, during the Second World War. The term was first used by the British press and is the German word for 'lightning'.

Charlie Chong/ Fion Zhang


Part VB2

Me 262-Blitz The Messerschmitt Me 262, nicknamed Schwalbe (German: "Swallow") in fighter versions, or Sturmvogel (German: "Storm Bird") in fighter-bomber versions, was the world's first operational jet-powered fighter aircraft. Design work started before World War II began, but problems with engines, metallurgy and top-level interference kept the aircraft from operational status with the Luftwaffe until mid-1944. The Me 262 was faster and more heavily armed than any Allied fighter, including the British jet-powered Gloster Meteor.[5] One of the most advanced aviation designs in operational use during World War II,[6] the Me 262's roles included light bomber, reconnaissance and experimental night fighter versions.

https://en.wikipedia.org/wiki/Messerschmitt_Me_262

Charlie Chong/ Fion Zhang


Part VB2

V2 Rocket-Blitz
The V-2 (German: Vergeltungswaffe 2, "Retribution Weapon 2"), technical name Aggregat 4 (A4), was the world's first long-range[4] guided ballistic missile. The missile, powered by a liquid-propellant rocket engine, was developed during the Second World War in Germany as a "vengeance weapon", assigned to attack Allied cities as retaliation for the Allied bombings against German cities. The V-2 rocket also became the first man-made object to travel into space by crossing the Kármán line with the vertical launch of MW 18014 on 20 June 1944.

https://en.wikipedia.org/wiki/Messerschmitt_Me_262

Charlie Chong/ Fion Zhang


Part VB2

Kanban 看板
This method is used in a process to signal an upstream supplier (internal or external) that more material or product is needed downstream. Originally it was just a manual card system, but it has evolved into more sophisticated signaling methods for some organizations. It is referred to as a pull system because it serves to pull material or product from a supplier rather than relying on a scheduling system to push the material or product forward at predetermined intervals. It is said that the kanban method was inspired by Toyota's Taiichi Ohno's visit to a U.S. supermarket.

Charlie Chong/ Fion Zhang


Part VB2

Kanban 看板- War Room

Charlie Chong/ Fion Zhang


Part VB2

Kanban 看板- War Room

Charlie Chong/ Fion Zhang


Part VB2

Just-in-time Just-in-time (JIT) is a material requirements planning system that provides for the delivery of material or product at the exact time and place where the material or product will be used. Highly coordinated delivery and production systems are required to match delivery to use times. The aim is to eliminate or reduce on- hand inventory (buffer stock) and deliver material or product that requires no or little incoming inspection.

Charlie Chong/ Fion Zhang


Part VB2

Just-in-time - an Inventory Lean Strategy
Just-in-time (JIT): in lean manufacturing, JIT is very important, as it means supplying the right product, in the right quantity, at the right time. In a football game, supplying the ball to the right person at the right time with the right force will surely help in winning the game. JIT, a methodology aimed primarily at reducing flow times within the production system as well as response times from suppliers and to customers, denotes a manufacturing system in which materials or components are delivered immediately before they are required in order to minimize inventory costs. JIT, a production model in which items are created to meet demand, not created in surplus or in advance of need, is an inventory strategy companies employ to increase efficiency and decrease waste by receiving goods only as they are needed in the production process, thereby reducing inventory costs.

https://www.crcpress.com/authors/news/i3151-happy-new-year-hansei-on-lean-manufacturing-at-new-years-eve

Charlie Chong/ Fion Zhang


Part VB2

Takt Time
Takt time is the total work time available (per day or per shift) divided by the demand requirements (per day or per shift) of customers. Takt time establishes the production pace relative to the demand. For example, let's say customer orders (demand) average 240 units per day. The production line runs on one shift (480 minutes) per day, so takt time is two minutes: to meet demand, one unit must be completed every two minutes. Figure 19.3 shows an analysis of actual time versus takt time for a process consisting of four operations.

Takt time is the average time between the start of production of one unit and the start of production of the next unit, when these production starts are set to match the rate of customer demand. For example, if a customer wants 10 units per week, then, given a 40-hour work week and steady flow through the production line, the average time between production starts should be 4 hours (actually less than that in order to account for things like machine downtime and scheduled paid employee breaks), yielding 10 units produced per week. Note that a common misconception is that takt time is related to the time it takes to actually make the product. In fact, takt time simply reflects the rate of production needed to match the demand. In the previous example, whether it takes 4 minutes or 4 years to produce the product, the takt time is based on customer demand. If a process or a production line is unable to produce at takt time, either demand leveling, additional resources, or process reengineering is needed to correct the issue. https://en.wikipedia.org/wiki/Takt_time
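A minimal sketch of the takt time arithmetic described above, using the 480-minute shift and 240-unit daily demand from the example (the function name is an assumption for illustration):

```python
def takt_time(available_minutes: float, demand_units: float) -> float:
    """Takt time = available work time divided by customer demand for the same period."""
    return available_minutes / demand_units

# One 480-minute shift per day, customer demand of 240 units per day:
print(takt_time(480, 240))   # 2.0 minutes per unit
```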

Charlie Chong/ Fion Zhang


Part VB2

Figure 19.3 Takt time analysis.

Charlie Chong/ Fion Zhang


Part VB2

Line Balancing Line balancing is the method of proportionately distributing workloads within the value stream to meet takt time. The analysis begins with the current state. A balance chart of work steps, time requirements, and operators for each workstation is developed. It shows improvement opportunities by comparing the time of each operation to takt time and total cycle time. Formulae are used to establish a proposed- state balanced line.

Charlie Chong/ Fion Zhang


Part VB2

Line Balancing (Production and Operations Management)
"Line balancing" in a layout means arrangement of machine capacity to secure relatively uniform flow at capacity operation. It can also be described as "a layout which has equal operating times at the successive operations in the process as a whole." Product layout requires line balancing, and if any production line remains unbalanced, machinery utilization may be poor. Let us assume that there is a production line with workstations x, y, and z. Also assume that each machine at x, y, and z can produce 200 items, 100 items, and 50 items per hour respectively. If each machine were to produce only 50 items per hour, then each hour the machines at x and y would be idle for 45 and 30 minutes respectively. Such a layout is an unbalanced one, and the production line needs balancing. http://www.businessmanagementideas.com/industries/plant-layout/line-balancing-meaning-and-methods/6784

http://www.me.nchu.edu.tw/lab/CIM/www/courses/Flexible%20Manufacturing%20Systems/Microsoft%20Word%20-%20Chapter8F-ASSEMBLY%20SYSTEMS%20AND%20LINE%20BALANCING.pdf

Charlie Chong/ Fion Zhang


Part VB2

Line Balancing (Production and Operations Management)
a. Suppose that a five-station line has station process times of 1 minute at all stations except the fifth, which takes 2 minutes. The production rate is then limited by the bottleneck: Rc = 30 units/hr.
b. If two stations were arranged in parallel at the fifth station, the output could be increased to Rc = 60 units/hr.

http://www.me.nchu.edu.tw/lab/CIM/www/courses/Flexible%20Manufacturing%20Systems/Microsoft%20Word%20-%20Chapter8F-ASSEMBLY%20SYSTEMS%20AND%20LINE%20BALANCING.pdf
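A minimal sketch of the bottleneck arithmetic in this example (the station times and variable names are assumptions for illustration): the line's output rate is set by its slowest station, and duplicating that station halves its effective time.

```python
station_times = [1, 1, 1, 1, 2]        # minutes per unit at each of the five stations

rc = 60 / max(station_times)           # output limited by the 2-minute bottleneck
print(rc)                              # 30.0 units per hour

# Two parallel stations at station 5 halve its effective time to 1 minute.
parallel_times = [1, 1, 1, 1, 2 / 2]
print(60 / max(parallel_times))        # 60.0 units per hour
```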

Charlie Chong/ Fion Zhang


Part VB2

Standardized Work Standardized work consists of agreed-to work instructions that utilize the best known methods and sequence for each manufacturing or assembly process. Establishing standardized work supports productivity improvement, high quality, and safety of workers.

Charlie Chong/ Fion Zhang


Part VB2

Single-piece Flow
One-piece flow is a product moving through the process one unit at a time. This approach differs from batch processing, which produces batches of the same item at a time, moving the product through the process batch by batch. Advantages of single-piece flow are:
• It cuts the elapsed time between the customer's order and shipment of the order.
• It reduces or eliminates wait-time delays between processing of batches.
• It reduces the inventory, labor, energy, and space required by batch-and-queue processing.
• It reduces product damage caused by handling and temporary storage of batches.
• It enables the detection of quality problems early in the process.
• It allows for flexibility in meeting customer demands.
• It enables identification of non-value-added steps, thereby eliminating waste.

Charlie Chong/ Fion Zhang


Part VB2

Single Piece Flow
For the ease of understanding, this author is using the word 'piece' to mean (in the generic sense) the making of a tangible product. It could be an ice cream cone, a widget, or an automobile. One of the tenets of lean is single piece flow. Instead of building up a stack of inventory between the steps in the process, the idea with single piece flow (also known as one piece flow) is to build at the pulse rate of customer demand. This pulse rate of customer demand, known as takt time, ebbs and flows over time. With single piece flow, the idea is to make a piece only when the customer asks for one.

https://www.gembaacademy.com/promos/one-piece-flow-simulation

Charlie Chong/ Fion Zhang


Part VB2

Single-Piece Flow: Benefits of One-Piece Flow and How It Is Implemented
In "one piece flow" production, the product transfers from one phase to the next one piece at a time. This approach differs from lot production, where a number of units are prepared at an agreed stage and then every unit is moved to the next level at the same time. One-piece movement is encouraged by the majority of operational excellence practitioners. The manufacturer can enjoy many benefits by implementing one-piece flow, since there is no idle time between units.

Minimizes financial loss. With lot production, the piece that is completed first cannot transfer to the next stage until the last piece in the lot is finished, so the first piece remains unusable until the full lot is processed. One-piece flow also allows the manufacturer to stop the production process earlier when a defect is found so the problem can be fixed; the defect affects only the unit in hand, and it can be avoided in subsequent units because the problem is fixed immediately. In this way the manufacturer avoids financial loss.

Improves flexibility. One-piece flow improves flexibility because it is quicker than batch and queue. Because one-piece flow is faster, it is possible to wait longer to plan an order and still deliver on time, which makes it easier to accommodate last-minute changes from the customer; whatever the industry, customers often change their preferences. With lot production, sampling inspection takes place after a certain production step; if a defect is found during inspection, the complete lot is suspect, and all parts in the lot must be reviewed because more defects can be expected.

Minimizes waste. If an organization wants to get rid of the eight categories of waste, all individual activities must be united and synchronized by implementing one-piece flow. To accomplish this, improved layouts are needed so that the travel distance between successive operations can be reduced. In implementing one-piece flow, the most common approach is known as the work cell: workstations are moved near to each other to decrease transport between them. In a conventional plant setting, manufacturing departments carry out specific tasks, including grinding, welding, fabrication, drilling, and assembly, each using its own single-skilled workforce. The main emphasis of the work cell approach is the flow of product, and the people manage themselves in accordance with customer demand by altering the way the work content is divided. Manufacturing cells are intended to deliver complete products to an internal work cell or an external customer. Work cells execute a number of procedures or tasks and call for a multi-skilled, flexible workforce.

Decreases operator movement. Operator movements are also reduced when the cell is U-shaped, in which case the work cell can be referred to as a U-cell.

To complete all process activities in the least amount of physical space, U-shaped work cells are essential, and they must be linked. The standards of high-quality work cell design include organizing the work in sequence, using a counterclockwise flow, placing machines and processes close together, and putting the last operation near the initial operation. For production authorization, the work cell must be planned to achieve line balance with regard to takt time and kanban. The work cell employees are authorized by the organization to take all necessary measures to meet the needs of internal and external customers. The number of people within the cell establishes the quantity of work in process (WIP) in the cell and the cycle time. In some companies, employees face a complex environment where customer demand varies and different production lines are offered for diverse product families. In that case, simply adding or removing people from work cells can adjust the cycle times so that these changes in demand can be accommodated. Depending on the kind of process, work cells can be implemented across two or more workplaces.

http://www.latestquality.com/one-piece-flow/

Charlie Chong/ Fion Zhang


Part VB2

Cellular Operations
A work cell is a self-contained unit dedicated to performing all the operations to complete a product or a major portion of a production run. Equipment is configured to accomplish:
• Sequential processing
• Counterclockwise flow, to enable operators to optimize use of their right hands as they move through the cell (moving the part to each subsequent operation)
• Shorter movements by close proximity of machines
• Positioning of the last operation close to the first operation for the next part
• Adaptability of the cell to accommodate customers' varying demands
The most prevalent layout is a U shape (see Figure 19.4), although L, S, and V shapes have been used. Product demand, product mix, and constraints are all considerations in designing a work cell.

Charlie Chong/ Fion Zhang


Part VB2

Figure 19.4 Typical U-shape cell layout.

Charlie Chong/ Fion Zhang


Part VB2

Cellular Operations
Cellular flow manufacturing is a method of organizing manual and machine operations in the most efficient combination to maximize value-added content and minimize waste.
Cellular manufacturing benefits:
• Simplified scheduling and communication
• Minimal inventory needed between processes
• Increased visibility: provides quick feedback and problem resolution
• Development of increased product knowledge: workers are trained to understand the total process
• Shorter lead times
• Small lots and one-piece flow to match customer demand

http://www.webpages.uidaho.edu/mindworks/Lean/Lecture%20Notes/ME%20410%20Lecture%20Slides%2007%20Cell%20Design.pdf

Charlie Chong/ Fion Zhang


Part VB2

Cellular Operations
• The concept of performing all of the necessary operations to make a component, subassembly, or finished product in a work cell.
• The basic assumption is that product or part families exist and that the combined volume of products in the family justifies dedicating machines and workers to focused work cells.
• Basic building blocks of cells:
  - Workstations
  - Machines
  - Workers
  - Tools, gages, and fixtures
  - POU materials storage
  - Materials handling between workstations

http://web.utk.edu/~kkirby/IE527/Ch10.pdf

Charlie Chong/ Fion Zhang




Part VC

Chapter 20 Basic Statistics/Part VC _________________________ Descriptive statistics furnish a simple method of extracting information from what often seems at first glance to be a mass of random numbers. These characteristics of the data may relate to: 1. Typical, or central, value (mean, median, mode) 2. A measure of how much variability is present (variance, standard deviation) 3. A measure of frequency (percentiles) Statistics is concerned with scientific methods for collecting, organizing, summarizing, presenting, and analyzing data, as well as drawing valid conclusions and making reasonable decisions on the basis of such analysis. In a narrower sense, the term statistics is used to denote the data themselves or numbers derived from the data, such as averages. An auditor must look at how an auditee defines the process and necessary controls, and must establish some type of measurement system to ensure that the measurements or the process was properly defined. The auditor looks at the results of what other people have done, and if they used statistical tools, the auditor must be knowledgeable enough to decide whether the information being gathered from the data is valid. Descriptive Statistics The phase of statistics that seeks only to describe and analyze a given group (sample) without drawing any conclusions or inferences about a larger group (population) is referred to as deductive or descriptive statistics. Measures of central tendency and dispersion are the two most fundamental concepts in statistical analysis.

Charlie Chong/ Fion Zhang


Part VC

Descriptive Statistics
Descriptive statistics are numbers that are used to summarize and describe data. The word "data" refers to information that has been collected from an experiment, a survey, a historical record, etc.

Inferential Statistics
In inferential statistics, a sample is a set of data taken from the population to represent the population. Probability distributions, hypothesis testing, correlation testing, and regression analysis all fall under the category of inferential statistics. https://www.slideshare.net/ShayanZahid1/descriptive-statistics-and-inferential-statistics

Charlie Chong/ Fion Zhang


Part VC

Descriptive Statistics
Descriptive statistics is the term given to the analysis of data that helps describe, show, or summarize data in a meaningful way such that, for example, patterns might emerge from the data. Descriptive statistics do not, however, allow us to make conclusions beyond the data we have analysed or reach conclusions regarding any hypotheses we might have made. They are simply a way to describe our data. Descriptive statistics are very important because if we simply presented our raw data it would be hard to visualize what the data was showing, especially if there was a lot of it. Descriptive statistics therefore enable us to present the data in a more meaningful way, which allows simpler interpretation of the data. For example, if we had the results of 100 pieces of students' coursework, we may be interested in the overall performance of those students. We would also be interested in the distribution or spread of the marks. Descriptive statistics allow us to do this. Typically, there are two general types of statistic that are used to describe data:

Measures of central tendency: these are ways of describing the central position of a frequency distribution for a group of data. In this case, the frequency distribution is simply the distribution and pattern of marks scored by the 100 students from the lowest to the highest. We can describe this central position using a number of statistics, including the mode, median, and mean.

Measures of spread: these are ways of summarizing a group of data by describing how spread out the scores are. For example, the mean score of our 100 students may be 65 out of 100. However, not all students will have scored 65 marks. Rather, their scores will be spread out. Some will be lower and others higher. Measures of spread help us to summarize how spread out these scores are. To describe this spread, a number of statistics are available to us, including the range, quartiles, absolute deviation, variance, and standard deviation.

When we use descriptive statistics it is useful to summarize our group of data using a combination of tabulated description (i.e., tables), graphical description (i.e., graphs and charts) and statistical commentary (i.e., a discussion of the results).

https://statistics.laerd.com/statistical-guides/descriptive-inferential-statistics.php

Charlie Chong/ Fion Zhang


Part VC

Inferential Statistics We have seen that descriptive statistics provide information about our immediate group of data. For example, we could calculate the mean and standard deviation of the exam marks for the 100 students and this could provide valuable information about this group of 100 students. Any group of data like this, which includes all the data you are interested in, is called a population. A population can be small or large, as long as it includes all the data you are interested in. For example, if you were only interested in the exam marks of 100 students, the 100 students would represent your population. Descriptive statistics are applied to populations, and the properties of populations, like the mean or standard deviation, are called parameters as they represent the whole population (i.e., everybody you are interested in). Often, however, you do not have access to the whole population you are interested in investigating, but only a limited number of data instead. For example, you might be interested in the exam marks of all students in the UK. It is not feasible to measure all exam marks of all students in the whole of the UK so you have to measure a smaller sample of students (e.g., 100 students), which are used to represent the larger population of all UK students. Properties of samples, such as the mean or standard deviation, are not called parameters, but statistics. Inferential statistics are techniques that allow us to use these samples to make generalizations about the populations from which the samples were drawn. It is, therefore, important that the sample accurately represents the population. The process of achieving this is called sampling (sampling strategies are discussed in detail here on our sister site). Inferential statistics arise out of the fact that sampling naturally incurs sampling error and thus a sample is not expected to perfectly represent the population. The methods of inferential statistics are (1) the estimation of parameter(s) and (2) testing of statistical hypotheses. Keywords: • Parameters • Statistics

https://statistics.laerd.com/statistical-guides/descriptive-inferential-statistics.php

Charlie Chong/ Fion Zhang


Part VC1

1. Measures of Central Tendency
"Most frequency distributions exhibit a 'central tendency,' i.e., a shape such that the bulk of the observations pile up in the area between the two extremes. Central tendency is one of the most fundamental concepts in all statistical analysis. There are three principal measures of central tendency: mean, median, and mode."

Mean The mean, arithmetic mean, or mean value is the sum total of all data values divided by the number of data values. It is the average of the total of the sample values. Mean is the most commonly used measure of central tendency and is the only such measure that includes every value in the data set. The arithmetic mean is used for symmetrical or near-symmetrical distributions, or for distributions that lack a single, clearly dominant peak. Median The median is the middle value (midpoint) of a data set arranged in either ascending or descending numerical order. The median is used for reducing the effects of extreme values or for data that can be ranked but are not economically measurable, such as shades of colors, odors, or appearances. Mode The mode is the value or number that occurs most frequently in a data set. If all the values are different, no mode exists. If two values have the highest and same frequency of occurrence, then the data set or distribution has two modes and is referred to as bimodal. The mode is used for severely skewed distributions, for describing an irregular situation when two peaks are found, or for eliminating the observed effects of extreme values.

Charlie Chong/ Fion Zhang


Part VC1

Mean/ Median/ Mode
The "mode" is the value that occurs most often. If no number in the list is repeated, then there is no mode for the list.

The "median" is the "middle" value in the list of numbers. To find the median, the numbers have to be listed in numerical order from smallest to largest, so you may have to rewrite the list before you can find the median. Example: for the data set 13, 18, 13, 14, 13, 16, 14, 21, 13, rewritten in ascending order as 13, 13, 13, 13, 14, 14, 16, 18, 21, the median is 14.

The "mean" is the "average" you're used to: add up all the numbers and then divide by how many numbers there are.

Charlie Chong/ Fion Zhang




Part VC1

Mode Bimodal The mode is the value or number that occurs most frequently in a data set. If all the values are different, no mode exists. If two values have the highest and same frequency of occurrence, then the data set or distribution has two modes and is referred to as bimodal. The mode is used for severely skewed distributions, for describing an irregular situation when two peaks are found, or for eliminating the observed effects of extreme values.

http://polymerprocessing.blogspot.com/2008/09/bimodal-high-density-polyethylene-hdpe.html

Charlie Chong/ Fion Zhang


Part VC1

Mean/ Median/ Mode Sample Right-Skewed and Left-Skewed Frequency Distributions (a) This is an example of a right-skewed frequency distribution in which the tail of the distribution goes off to the right. In a right-skewed distribution, the mean is greater than the median because the unusually high scores distort it. (b) This is an example of a left-skewed frequency distribution in which the tail of the distribution goes off to the left. The mean is less than the median because the unusually low scores distort it.

(Figure: right-skewed/positive-skew and left-skewed/negative-skew distributions, with the median marked on each.)

http://www.macmillanhighered.com/BrainHoney/Resource/22292/digital_first_content/trunk/test/griggs4e/asset/ch01/c01_fig05.html

Charlie Chong/ Fion Zhang


Part VC1

Mean/ Median/ Mode
Find the mean, median, mode, and range for the following list of values: 13, 18, 13, 14, 13, 16, 14, 21, 13

The mean is the usual average, so add and then divide: (13 + 18 + 13 + 14 + 13 + 16 + 14 + 21 + 13) ÷ 9 = 15. Note that the mean, in this case, isn't a value from the original list. This is a common result; you should not assume that your mean will be one of your original numbers.

The median is the middle value, so first rewrite the list in numerical order: 13, 13, 13, 13, 14, 14, 16, 18, 21. There are nine numbers in the list, so the middle one will be the (9 + 1) ÷ 2 = 10 ÷ 2 = 5th number, so the median is 14.

The mode is the number that is repeated more often than any other, so 13 is the mode.

The range: the largest value in the list is 21 and the smallest is 13, so the range is 21 − 13 = 8.

mean: 15; median: 14; mode: 13; range: 8
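The same worked example can be reproduced with Python's standard statistics module; a minimal sketch:

```python
import statistics

values = [13, 18, 13, 14, 13, 16, 14, 21, 13]

print(statistics.mean(values))    # 15
print(statistics.median(values))  # 14
print(statistics.mode(values))    # 13
print(max(values) - min(values))  # 8 (range)
```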


http://polymerprocessing.blogspot.com/2008/09/bimodal-high-density-polyethylene-hdpe.html

Charlie Chong/ Fion Zhang


Part VC2

2. Measures of Dispersion
Dispersion is the variation in the spread of data about the mean. Dispersion is also referred to as variation, spread, and scatter. A measure of dispersion is the second of the two most fundamental measures of all statistical analyses. The dispersion within a central tendency is normally measured by one or more of several measuring principles. Data are always scattered around the zone of central tendency, and the extent of this scatter is called dispersion or variation. There are several measures of dispersion:
• range,
• standard deviation, and
• coefficient of variation.

Range The range is the simplest measure of dispersion. It is the difference between the maximum and minimum values in an observed data set. Since it is based on only two values from a data set, the measurement of range is most useful when the number of observations or values is small (10 or fewer).

Charlie Chong/ Fion Zhang


Part VC2

Standard Deviation Standard deviation, the most important measure of variation, measures the extent of dispersion around the zone of central tendency. For samples from a normal distribution, it is defined as the resulting value of the square root of the sum of the squares of the observed values, minus the arithmetic mean (numerator), divided by the total number of observations, minus one (denominator). The standard deviation of a sample of data is given as:

σ = √( Σ(Xᵢ − μ)² / n )   (population)        or        s = √( Σ(Xᵢ − X̄)² / (n − 1) )   (sample)

where the sum runs over i = 1 to n, and:
s = sample standard deviation
σ = population standard deviation
n = number of samples (observations or data points)
Xᵢ = value measured
X̄ = average of the values measured
μ = population mean


Charlie Chong/ Fion Zhang


Part VC2

Standard Deviation
In statistics, the standard deviation (SD, also represented by the Greek letter sigma σ or the Latin letter s) is a measure that is used to quantify the amount of variation or dispersion of a set of data values. A low standard deviation indicates that the data points tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values.

σ = √( Σ(Xᵢ − μ)² / n )        s = √( Σ(Xᵢ − X̄)² / (n − 1) )
In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. This technique is named after Friedrich Bessel. In estimating the population variance from a sample when the population mean is unknown, the uncorrected sample variance is the mean of the squares of deviations of sample values from the sample mean (i.e. using a multiplicative factor 1/n). In this case, the sample variance is a biased estimator of the population variance. https://en.wikipedia.org/wiki/Bessel%27s_correction

The (sample) standard deviation s of a random variable, statistical population, data set, or probability distribution is the square root of its (sample) variance: s = √v, where v = Σ(xᵢ − x̄)² / (n − 1). It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data. In addition to expressing the variability of a population, the standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. This derivation of a standard deviation is often called the "standard error" of the estimate or "standard error of the mean" when referring to a mean. It is computed as the standard deviation of all the means that would be computed from that population if an infinite number of samples were drawn and a mean for each sample were computed.

https://en.wikipedia.org/wiki/Standard_deviation

Charlie Chong/ Fion Zhang


Part VC2

It is very important to note that the standard deviation of a population and the standard error of a statistic derived from that population (such as the mean) are quite different but related (related by the inverse of the square root of the number of observations). The reported margin of error of a poll is computed from the standard error of the mean (or alternatively from the product of the standard deviation of the population and the inverse of the square root of the sample size, which is the same thing) and is typically about twice the standard deviation—the half-width of a 95 percent confidence interval. In science, many researchers report the standard deviation of experimental data, and only effects that fall much farther than two standard deviations away from what would have been expected are considered statistically significant—normal random error or variation in the measurements is in this way distinguished from likely genuine effects or associations. The standard deviation is also important in finance, where the standard deviation on the rate of return on an investment is a measure of the volatility of the investment. When only a sample of data from a population is available, the term standard deviation of the sample or sample standard deviation can refer to either the above-mentioned quantity as applied to those data or to a modified quantity that is an unbiased estimate of the population standard deviation (the standard deviation of the entire population).

https://en.wikipedia.org/wiki/Standard_deviation
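A minimal sketch of the relationship described above, using a small hypothetical sample (the data values are assumptions for illustration): the standard error of the mean is the sample standard deviation divided by √n, and the quoted margin of error is roughly twice that (the approximate 95 percent half-width).

```python
import math
import statistics

sample = [18, 17, 19, 18, 20, 17, 18, 19]   # hypothetical measurements

s = statistics.stdev(sample)                 # sample standard deviation
sem = s / math.sqrt(len(sample))             # standard error of the mean
margin_of_error = 2 * sem                    # approximate 95% half-width
print(s, sem, margin_of_error)
```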

Charlie Chong/ Fion Zhang


Part VC2

Population Standard Deviation, σ
Example: You and your friends have just measured the heights of your dogs (in millimeters). The heights (at the shoulders) are: 600 mm, 470 mm, 170 mm, 430 mm, and 300 mm. Find the mean, the variance, and the standard deviation. The first step is to find the mean:

Mean, μ = (600 + 470 + 170 + 430 + 300) / 5 = 1970 / 5 = 394 mm

so the mean (average) height is 394 mm. Let's plot this on the chart:

https://www.mathsisfun.com/data/standard-deviation.html

Charlie Chong/ Fion Zhang


Part VC2

To calculate the variance, take each difference from the mean, square it, and then average the result:

σ² = (1/n) Σ(xᵢ − μ)²
   = (206² + 76² + (−224)² + 36² + (−94)²) / 5
   = (42436 + 5776 + 50176 + 1296 + 8836) / 5
   = 108520 / 5
   = 21704

So the variance is 21,704. And the standard deviation is just the square root of the variance, so:

σ = √σ² = √21704 = 147.32... ≈ 147 (to the nearest mm)

https://www.mathsisfun.com/data/standard-deviation.html

Charlie Chong/ Fion Zhang


Part VC2

And the good thing about the Standard Deviation is that it is useful. Now we can show which heights are within one Standard Deviation (147mm) of the Mean:

So, using the Standard Deviation we have a "standard" way of knowing what is normal, and what is extra large or extra small. Rottweilers are tall dogs. And Dachshunds are a bit short ... but don't tell them!

https://www.mathsisfun.com/data/standard-deviation.html

Charlie Chong/ Fion Zhang


Part VC2

But ... there is a small change with Sample Data
Our example has been for a population (the 5 dogs are the only dogs we are interested in). But if the data is a sample (a selection taken from a bigger population), then the calculation changes!

When you have "N" data values:
• The Population: divide by N when calculating the variance (as we did)
• A Sample: divide by N − 1 when calculating the variance

All other calculations stay the same, including how we calculated the mean. Example: if our 5 dogs are just a sample of a bigger population of dogs, we divide by 4 instead of 5, like this:

Sample variance = 108,520 / 4 = 27,130
Sample standard deviation = √27,130 = 165 (to the nearest mm)

Population: σ = 147        Sample: s = 165

Think of it as a "correction" when your data is only a sample.

https://www.mathsisfun.com/data/standard-deviation.html
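A minimal sketch reproducing the dog-height example with Python's statistics module; pstdev divides by N (population) and stdev divides by N − 1 (sample, Bessel's correction):

```python
import statistics

heights = [600, 470, 170, 430, 300]   # mm

print(statistics.mean(heights))       # 394
print(statistics.pstdev(heights))     # population sigma, about 147.3
print(statistics.stdev(heights))      # sample s, about 164.7 (divides by N - 1)
```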

Charlie Chong/ Fion Zhang


Part VC2

Formulas
Here are the two formulas, explained at Standard Deviation Formulas if you want to know more:

The "Population Standard Deviation":  σ = √( (1/N) Σ(xᵢ − μ)² )

The "Sample Standard Deviation":  s = √( (1/(N − 1)) Σ(xᵢ − x̄)² )

https://www.mathsisfun.com/data/standard-deviation.html

Charlie Chong/ Fion Zhang


Part VC2 https://www.mathsisfun.com/data/standard-deviation-calculator.html

Charlie Chong/ Fion Zhang


Part VC2

Coefficient of Variation
The final measure of dispersion, the coefficient of variation, is the standard deviation divided by the mean. Variation is the guaranteed existence of a difference between any two items or observations; the concept of variation states that no two observed items will ever be identical.

Keywords: coefficient of variation is the standard deviation divided by the mean

Charlie Chong/ Fion Zhang


Part VC2 http://www.who.int/ihr/training/laboratory_quality/quantitative/en/

Charlie Chong/ Fion Zhang


Part VC2

Coefficient of Variation
In probability theory and statistics, the coefficient of variation (CV or cv), also known as relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is often expressed as a percentage, and is defined as the ratio of the standard deviation σ to the mean μ (or its absolute value, |μ|): CV = (σ/μ) × 100%. The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R (ANOVA: analysis of variance). In addition, CV is utilized by economists and investors in economic models and in determining the volatility of a security. ANOVA gauge repeatability and reproducibility is a measurement systems analysis technique that uses an analysis of variance (ANOVA) random effects model to assess a measurement system.

The CV shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on a ratio scale, as these are the measurements that allow the division operation. The coefficient of variation may not have any meaning for data on an interval scale.[2] For example, most temperature scales (e.g., Celsius, Fahrenheit, etc.) are interval scales with arbitrary zeros, so the coefficient of variation would be different depending on which scale is used. On the other hand, Kelvin temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. While the standard deviation (SD) can be meaningfully derived using Kelvin, Celsius, or Fahrenheit, the CV is only valid as a measure of relative variability for the Kelvin scale because its computation involves division. The coefficient of variation (CV) is defined as the ratio of the standard deviation σ to the mean μ: CV = σ/μ.

Measurements that are log-normally distributed exhibit stationary CV; in contrast, SD varies depending upon the expected value of measurements. A more robust possibility is the quartile coefficient of dispersion: half the interquartile range, (Q3 − Q1)/2, divided by the average of the quartiles (the midhinge), (Q1 + Q3)/2.

https://en.wikipedia.org/wiki/Coefficient_of_variation

Charlie Chong/ Fion Zhang


Part VC2

Examples
1. A data set of [100, 100, 100] has constant values. Its standard deviation is 0 and its average is 100, giving the coefficient of variation as 0 / 100 = 0%.
2. A data set of [90, 100, 110] has more variability. Its standard deviation is 8.16 and its average is 100, giving the coefficient of variation as 8.16 / 100 = 8.16%.
3. A data set of [1, 5, 6, 8, 10, 40, 65, 88] has still more variability. Its standard deviation is 32.9 and its average is 27.8, giving a coefficient of variation of 32.9 / 27.8 = 118%.
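A minimal sketch of the calculation used in these examples (the helper name is an assumption), using the population standard deviation as above:

```python
import statistics

def coefficient_of_variation(data):
    """CV as a percentage: population standard deviation divided by the mean."""
    return statistics.pstdev(data) / statistics.mean(data) * 100

print(coefficient_of_variation([100, 100, 100]))           # 0.0 %
print(round(coefficient_of_variation([90, 100, 110]), 2))  # 8.16 %
```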

Comparison to standard deviation
• Advantages: The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation.
• Disadvantages: When the mean value is close to zero, the coefficient of variation will approach infinity and is therefore sensitive to small changes in the mean. This is often the case if the values do not originate from a ratio scale. Unlike the standard deviation, it cannot be used directly to construct confidence intervals for the mean. CVs are not an ideal index of the certainty of a measurement when the number of replicates varies across samples, because CV is invariant to the number of replicates while the certainty of the mean improves with increasing replicates. In this case, standard error in percent is suggested to be superior.

https://en.wikipedia.org/wiki/Coefficient_of_variation

Charlie Chong/ Fion Zhang


Part VC2

Data Types

The CV shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on a ratio scale, as these are the measurements that allow the division operation. The coefficient of variation may not have any meaning for data on an interval scale.[2] For example, most temperature scales (e.g., Celsius, Fahrenheit, etc.) are interval scales with arbitrary zeros, so the coefficient of variation would be different depending on which scale is used. On the other hand, Kelvin temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. The coefficient of variation (CV) is defined as the ratio of the standard deviation σ to the mean μ: CV = σ/μ.

https://en.wikipedia.org/wiki/Coefficient_of_variation

Charlie Chong/ Fion Zhang


Part VC2

Disadvantages: when the mean value is close to zero, the coefficient of variation approaches infinity. (Illustration: a black hole's singularity and event horizon.)

https://sacredgeometryinternational.com/the-meaning-of-sacred-geometry-ii-whats-the-point/

Charlie Chong/ Fion Zhang


Part VC2

Frequency distributions
A frequency distribution is a tool for presenting data in a form that clearly demonstrates the relative frequency of the occurrence of values, as well as the central tendency and dispersion of the data. Raw data are divided into classes to determine the number of values in a class, or class frequency. The data are arranged by classes, with the corresponding frequencies, in a table called a frequency distribution. When organized in this manner, the data are referred to as grouped data, as in Table 20.1.

The data in this table appear to be normally distributed. Even without constructing a histogram or calculating the average, the values appear to be centered around the value 18; in fact, the arithmetic average of these values is 18.02. The histogram in Figure 20.1 provides a graphic illustration of the dispersion of the data. This histogram may be used to compare the distribution of the data with specification limits in order to determine where the process is centered in relation to the specification tolerances.

Frequency distributions are useful to auditors for evaluating process performance and presenting the evidence of their analysis. Not only is a histogram a simple tool to use, it is also an effective method of illustrating process results.
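
A minimal sketch (Python with NumPy assumed; the measurements are hypothetical, loosely echoing data centered near 18) of grouping raw data into classes and tallying class frequencies:

```python
import numpy as np

# Hypothetical raw measurements (illustrative only)
raw = [17.2, 18.1, 18.4, 19.0, 16.8, 18.3, 17.9, 18.6, 19.4, 17.5,
       18.0, 18.2, 16.9, 18.8, 17.7, 18.5, 19.1, 17.3, 18.9, 18.1]

# Divide the data into classes and count the frequency in each class
counts, edges = np.histogram(raw, bins=6)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:5.2f} - {hi:5.2f}: {n}")
```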

Charlie Chong/ Fion Zhang


Part VC2

Table 20.1 Frequency distribution.

Charlie Chong/ Fion Zhang


Part VC2

Figure 20.1 Histogram data dispersion.

Charlie Chong/ Fion Zhang


Part VC3

3. Qualitative And Quantitative Analysis

Types of Data
During an audit, an auditor must analyze many different types of information to determine its acceptability against the overall audit scope and the characteristics, goals, and objectives of the product, process, or system being evaluated. This information may be documented or undocumented and includes procedures, drawings, work instructions, manuals, training records, electronic data on a computer disk, observations, and interview results. The auditor must determine whether the information is relevant to the audit purpose and scope.

Quantitative Data
Quantitative data means either that measurements were taken or that a count was made, such as counting the number of defective pieces removed (inspected out), the number of customer complaints, or the number of cycles of a molding press observed during a time period. In short, the data are expressed as a measurement or an amount. The IIA's Internal Auditing: Principles and Techniques suggests that there are many sources of quantitative data, such as:

• Test reports
• Product scrap rates
• Trend analyses
• Histograms
• Regression analyses
• Ratio analyses
• Lost-time accidents
• Frequency distributions
• Chi square tests
• Risk analyses
• Variance analyses
• Budget comparisons
• Mean, mode, median
• Profitability
• Cost/benefit studies

Charlie Chong/ Fion Zhang


Part VC3

Chi square tests
A chi-squared test, also written as χ² test, is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true. Without other qualification, "chi-squared test" is often used as shorthand for Pearson's chi-squared test. The chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories.

In the standard applications of the test, the observations are classified into mutually exclusive classes, and there is some theory, or null hypothesis, that gives the probability that any observation falls into the corresponding class. The purpose of the test is to evaluate how likely the observations that were made would be, assuming the null hypothesis is true.

Chi-squared tests are often constructed from a sum of squared errors, or through the sample variance. Test statistics that follow a chi-squared distribution arise from an assumption of independent normally distributed data, which is valid in many cases due to the central limit theorem. A chi-squared test can be used to attempt rejection of the null hypothesis that the data are independent.

Also considered a chi-squared test is a test in which this is asymptotically true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-squared distribution as closely as desired by making the sample size large enough.
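
A hedged example (Python with SciPy assumed; the observed and expected counts are hypothetical) of a chi-squared goodness-of-fit test comparing observed frequencies with expected frequencies:

```python
from scipy.stats import chisquare

# Hypothetical defect counts across four categories
observed = [48, 35, 15, 2]
# Expected counts under the null hypothesis (e.g., historical proportions); same total as observed
expected = [45, 35, 15, 5]

statistic, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-squared = {statistic:.3f}, p = {p_value:.3f}")
# Here the statistic is 2.0 on 3 degrees of freedom, p ≈ 0.57: no significant
# difference between observed and expected frequencies at conventional levels.
```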

Charlie Chong/ Fion Zhang


Part VC3

Chi square tests — reference links (video walk-throughs and an online chi-square calculator):

https://youtu.be/qYOMO83Z1WU
https://youtu.be/2cibIAU6jkg
https://www.youtube.com/watch?v=2QeDRsxSF9M&feature=youtu.be
https://www.di-mgt.com.au/chisquare-calculator.html

Charlie Chong/ Fion Zhang


Part VC3

Chi Square Tests

What are degrees of freedom?
Degrees of freedom can be described as the number of scores that are free to vary. For example, suppose you tossed three dice and the total score adds up to 12. If you rolled a 3 on the first die and a 5 on the second, then you know that the third die must be a 4 (otherwise, the total would not add up to 12). In this example, two dice are free to vary while the third is not, so there are 2 degrees of freedom. In many situations, the degrees of freedom are equal to the number of observations minus one. Thus, if the sample size were 20, there would be 20 observations, and the degrees of freedom would be 20 minus 1, or 19.

What is a chi-square critical value?
The chi-square critical value can be any number between zero and plus infinity. The chi-square calculator computes the probability that a chi-square statistic falls between 0 and the critical value. Suppose you randomly select a sample of 10 observations from a large population. In this example, the degrees of freedom (DF) would be 9, since DF = n − 1 = 10 − 1 = 9. Suppose you wanted to find the probability that a chi-square statistic falls between 0 and 13. In the chi-square calculator, you would enter 9 for degrees of freedom and 13 for the critical value. Then, after you click the Calculate button, the calculator would show the cumulative probability to be 0.84.

What is a cumulative probability? (≤ CV)
A cumulative probability is a sum of probabilities. The chi-square calculator computes a cumulative probability; specifically, it computes the probability that a chi-square statistic falls between 0 and some critical value (CV). With respect to notation, the cumulative probability that a chi-square statistic falls between 0 and CV is indicated by P(χ² ≤ CV).

https://www.stattrek.com/online-calculator/chi-square.aspx
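
The same lookup can be reproduced in code; a small sketch (Python with SciPy assumed):

```python
from scipy.stats import chi2

df = 9                  # degrees of freedom (n - 1 for a sample of 10)
critical_value = 13.0

# Cumulative probability P(chi-square <= 13) for a chi-squared statistic with 9 df
p = chi2.cdf(critical_value, df)
print(f"P(chi-square <= {critical_value}) = {p:.2f}")   # ~0.84, matching the example above
```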

Charlie Chong/ Fion Zhang


Part VC3

Qualitative Data
In contrast, qualitative data refer to the nature, kind, or attribute of an observation. Qualitative data may include single observations or data points, such as in the following examples: last month's withholding tax deposit was three days late; the paycheck amount was wrong; the injection needle was contaminated; the wrong reference standard was used; the purchase order specification gave the wrong activity level; computer equipment was missing from the clerk's office; or a regulatory violation was reported.

Whether the evidence is qualitative or quantitative, it should be objective, unbiased, and proven true. The auditor must analyze the data to determine relevancy. Some data are important and should be reported due to frequency or level. Other data are important due to the nature or kind of information, even though an event occurred only once.

With quantitative information, the determination of acceptability is fairly straightforward for two reasons. First, a direct comparison can be made between the information and the requirements or criteria for the audit. For instance, suppose the measure of system effectiveness used in an audit is fewer than a predetermined number of customer complaints about product quality in a three-month period. Analysis would consist of comparing customer complaint records against the criteria to determine whether the system is effective. Second, most quantitative information is considered reliable because, by nature, it should be free of emotion and bias.

Qualitative data must be unbiased and traceable, just like any observation that is used as objective evidence by the auditor. Additionally, the auditor should determine the usefulness or relevance of the information. For instance, the auditor may be informed that one customer complaint turned into a $10 million lawsuit. In this case, the data must be verified, and the auditor will seek to determine whether the data have any bearing on the management system. Once the information has been determined to have a real effect on the system, the auditor may use the data to draw conclusions about system effectiveness. Or the auditor may determine that the data represented a once-in-a-lifetime event and are not relevant to current operations.

Charlie Chong/ Fion Zhang


Part VC3

Patterns and Trends
Pattern analysis involves the collection of data in a way that readily reveals any kind of clustering that may occur. This technique is of major value in internal audits, since it is so effective in making use of data from repetitive audits. It can be both location- and time-sensitive. Pattern analysis is of limited value in external audits owing to the lack of repetition in such audits.4

While no one specific tool exists to determine patterns and trends, the following tools, matrices, and data systems are among the many that can help make such determinations. Patterns and trends can often indicate the severity of a problem and can be used to help determine whether a problem is a systemic issue.

Line/trend graphs connect points that represent pairs of numeric data to show how one variable of the pair is a function of the other. As a matter of convention, independent variables are plotted on the horizontal axis, and dependent variables are plotted on the vertical axis. Line graphs are used to show changes in data over time (see Figure 20.2). A trend is indicated when a series of points increases or decreases; nonrandom patterns indicate a trend or tendency. (Experience is required for proper interpretation.) Pareto charts and scatter diagrams are used as necessary.

Charlie Chong/ Fion Zhang


Part VC3

Following are characteristics of trend analysis (see the sketch below):
• Allows us to describe the historical pattern in the data
• Permits us to project past patterns and/or trends into the future
• Helps us understand the long-term variation of the time series
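
A minimal sketch of projecting a historical trend forward (Python with NumPy assumed; the monthly counts are hypothetical):

```python
import numpy as np

# Hypothetical monthly nonconformance counts (a time series showing a downward trend)
months = np.arange(1, 13)
counts = np.array([42, 40, 39, 37, 38, 35, 33, 34, 31, 30, 29, 27])

# Fit a straight line to describe the historical pattern
slope, intercept = np.polyfit(months, counts, deg=1)

# Project the fitted trend three months into the future
for m in range(13, 16):
    print(f"Month {m}: projected ~{slope * m + intercept:.1f}")
```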

Bar graphs also portray the relationship or comparison between pairs of variables, but one of the variables need not be numeric. Each bar in a bar graph represents a separate, or discrete, value. Bar graphs can be used to identify differences between sets of data (see Figure 20.3).

Pie charts are used to depict proportions of data or information in order to understand how they make up the whole. The entire circle, or "pie," represents 100% of the data. The circle is divided into "slices," with each segment proportional to the numeric quantity in each class or category (see Figure 20.4). A plotting sketch for both chart types follows this passage.

Matrices are two-dimensional tables showing the relationship between two sets of information. They can be used to show the logical connecting points between performance criteria and implementing actions, or between required actions and the personnel responsible for those actions. In this way, matrices can determine what actions and/or personnel have the greatest impact on an organization's mission. Auditors can use matrices as a way to focus auditing time and to organize the audit. In Table 20.2, the matrix helps the auditor by identifying organizational responsibilities for the different audit areas. This particular matrix is used to maximize use of time during the site visit.
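
To illustrate the bar graph and pie chart described above, a hedged plotting sketch (Python with Matplotlib assumed; the defect categories and counts are hypothetical):

```python
import matplotlib.pyplot as plt

categories = ["Scratch", "Dent", "Misalignment", "Other"]
counts = [24, 13, 8, 5]

fig, (ax_bar, ax_pie) = plt.subplots(1, 2, figsize=(9, 4))

# Bar graph: compare discrete categories side by side
ax_bar.bar(categories, counts)
ax_bar.set_title("Defects by category (bar graph)")

# Pie chart: show each category as a proportion of the whole
ax_pie.pie(counts, labels=categories, autopct="%1.0f%%")
ax_pie.set_title("Defects by category (pie chart)")

plt.tight_layout()
plt.show()
```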

Charlie Chong/ Fion Zhang


Part VC3

Figure 20.2 Line graph.

Charlie Chong/ Fion Zhang


Part VC3

Figure 20.3 Bar graph.

Charlie Chong/ Fion Zhang


Part VC3

Figure 20.4 Pie chart.

Charlie Chong/ Fion Zhang


Part VC3

Table 20.3, a much broader matrix, allows the auditor to do the long-range planning necessary for ensuring proper application of the audit program. In this example, the various audited areas (y axis) are applied against the different organizations to be audited.

Data systems exist in a wide range of forms and formats. They may include the weekly and monthly reports of laboratory or organizational performance that are used to alert the auditing organization to potential audit areas, or computerized databases that link performance to specific performance objectives or track actions to resolve programmatic weaknesses. In any case, data systems are important tools that provide the auditor with the data needed to focus audit activities. In Table 20.4 information on lost-time injuries is displayed in tabular form; the same information is displayed as a graph in Figure 20.5. This information can be used to focus the assessment on either the location of the injuries or the work procedures involved, to identify any weaknesses in the accident prevention program.

Charlie Chong/ Fion Zhang


Part VC3

Table 20.2 Area of responsibilities matrix.

Charlie Chong/ Fion Zhang


Part VC3

Table 20.3 Audit planning matrix.

Charlie Chong/ Fion Zhang


Part VC3

Table 20.4 Lost-time accident monthly summary.

Charlie Chong/ Fion Zhang


Part VC3

Figure 20.5 Lost work this month.

Charlie Chong/ Fion Zhang


Part V

Part V Quality Tools and Techniques [22 of the CQA Exam Questions or 14.7 percent] ________________________________________________ Chapter 18 Basic Quality and Problem-Solving Tools/Part VA Chapter 19 Process Improvement Techniques/Part VB Chapter 20 Basic Statistics/Part VC Chapter 21 Process Variation/Part VD Chapter 22 Sampling Methods/Part VE Chapter 23 Change Control and Configuration Management/Part VF Chapter 24 Verification and Validation/Part VG Chapter 25 Risk Management Tools/Part VH

Charlie Chong/ Fion Zhang


Charlie Chong/ Fion Zhang

