Introduction To Statistical Process Control

If you work in the manufacturing industry, then you will know how important it is to catch defects as early as possible. SPC, or statistical process control, is a set of methods first devised by Walter Shewhart at Bell Laboratories in the 1920s. W. Edwards Deming helped to standardize and spread SPC during the Second World War, before introducing it to Japan during the post-war American occupation. SPC soon became a huge part of Six Sigma and, by extension, lean manufacturing.

Why is SPC Such a Useful Tool?

SPC essentially measures the overall output of a process. It works by detecting and documenting small but significant changes, which allows corrections to be made before any defects occur.

SPC was used originally in manufacturing, as it is one of the best ways to significantly reduce waste from scrap and rework. Now, however, it is used in various service industries as well as in healthcare. SPC uses statistical methods to monitor process output, and it draws on designed experiments as well. The work is carried out in two distinct phases. The first phase establishes that the process is fit for purpose and characterizes how it should behave.

The second phase monitors the process to make sure that everything is working as it should. Determining the correct measurement frequency is important, and it will depend in part on the factors that significantly influence the process. If you want to learn more about statistical process control, or how to use control charts within it, then keep on reading this introduction to statistical process control.


Key Concepts within Statistical Process Control

One of the key concepts within SPC is variation within the process, which can be traced back to two basic causes. Shewhart documented these quite clearly.

Assignable Causes and Chance Causes

A key concept in SPC is the distinction between assignable causes and chance causes. The basic idea is that even if a process is kept constant, some random variation will still occur. This is unavoidable, but the great thing about statistical process control is that it helps us to understand it. If a process is only ever affected by chance, it becomes possible to calculate the probability of a given part falling outside its specification. Shewhart referred to all other sources of variation as assignable causes.

Causes and Control Charts with Statistical Methods

These are not random in nature; they are caused by identifiable events and changes.

It may be that a different operator has taken over the manufacturing process, that the temperature has changed, or that a new batch of material is being used. It is hard to predict the output of a process if you do not identify and measure the assignable causes of variation.

In modern treatments of statistical process control, chance causes are referred to as common causes, and assignable causes are called special causes.


Concepts Parallel with MSA

Concepts like this have parallels with MSA, or measurement system analysis. Common causes can be related to precision in MSA, and assignable causes can be compared to trueness or bias.

In this introduction to statistical process control, we are going to look at significant special-cause variations and how they can be detected and removed quickly by adopting control charts.

Explanation through a Common Cause

One of the main aims of SPC is to achieve a process in which all of the variation can be explained by common causes.

This gives a known probability of producing a defect. As Shewhart put it, when a process is in control, its behavior can be predicted.

You can predict the probability that an observed value will fall within given limits during the manufacturing process. Industrial and service processes often rely on statistical methods such as this, as well as on control charts, to ensure that everything is working as it should be.

In modern SPC, a process is stable when all of the variation it shows can be attributed to common causes. This is monitored through control charts, which define the level of variation to expect. Real processes may show many different sources of variation, but only a few of them are truly significant on the control charts. During the initial phase of SPC, special causes are identified and removed in order to stabilize the process. The control limits can then be determined, provided no further special cause emerges.
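As a rough sketch of how that initial phase might look in code, the snippet below estimates 3-sigma control limits from baseline measurements and flags later points that fall outside them. The data and function names are illustrative assumptions, and sigma here is the overall sample standard deviation; real Shewhart charts usually estimate sigma from subgroup ranges instead.

```python
from statistics import mean, stdev

def control_limits(samples, k=3):
    """Estimate the centre line and k-sigma control limits from stable data."""
    centre = mean(samples)
    sigma = stdev(samples)
    return centre - k * sigma, centre, centre + k * sigma

def out_of_control(points, lcl, ucl):
    """Flag measurements outside the control limits (possible special causes)."""
    return [x for x in points if x < lcl or x > ucl]

# Hypothetical baseline from a process believed to be stable.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
lcl, centre, ucl = control_limits(baseline)

# New measurements: 12.5 lies far outside the limits, suggesting a special cause.
print(out_of_control([10.0, 10.1, 12.5], lcl, ucl))
```

A point outside the limits would then trigger an investigation for an assignable cause, rather than an immediate adjustment of the process.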

Again, all of this can be documented through the use of control charts and through the process that statistical process control describes.

One example, which again can be uncovered through the use of control charts, is a process that was once stable beginning to drift, for instance as tooling wears. Whether a process is stable can only be evaluated once any sources of bias have been eliminated.

By doing this through control charts, you are left with a measurement that is influenced only by known random effects. This is the best way to ensure a reliable control chart and a stable manufacturing process, and it underpins more advanced multivariate SPC charts.


Basic Statistics

SPC is a huge subject, and it is entirely possible for it to involve some complex control charts and statistics.

However, only a basic understanding of these control charts and statistics is required for you to control your manufacturing processes.

Some of the things you need to understand in order to apply quality control are standard deviation, statistical significance and probability distributions.

Standard Deviation

Standard deviation, in terms of control charts and quality control, is a measure of how spread out a set of values is. If you have 20 parts at the end of a process, you may find that each part shows a slight variation in its measured value. By importing this data into a control chart and monitoring it effectively, you can gauge how much variation there is.

The simplest way to do this is to find the smallest and the largest value and subtract one from the other; this gives the range.
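A minimal illustration in Python, using made-up part measurements:

```python
# Hypothetical measurements (in mm) from 20 finished parts.
parts = [9.8, 10.2, 10.0, 9.9, 10.1, 10.3, 9.7, 10.0, 10.1, 9.9,
         10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0]

value_range = max(parts) - min(parts)   # largest minus smallest
print(round(value_range, 2))            # 0.6
```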

The issue here is that the more parts you check, the larger the range tends to become, so it is impossible to determine the probability of conformance from the range alone. The standard deviation is a more reliable measure, as it is based on the average distance of the values from the mean.


When you consider dispersion, it does not matter whether the values are bigger or smaller than the mean. All that matters is how far away they are.

Squaring each difference from the mean, adding the squares together, and dividing by the number of values gives the variance; taking the square root of the variance gives the standard deviation. For many processes, the probability of a given value increases from the lowest value up to the middle value and then decreases again towards the largest value.
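Spelled out step by step, with five made-up values (dividing by n − 1, as is conventional for a sample rather than a whole population):

```python
import math

def std_dev(values):
    """Sample standard deviation, computed step by step."""
    m = sum(values) / len(values)                 # 1. the mean
    squared = [(x - m) ** 2 for x in values]      # 2. squared distances from the mean
    variance = sum(squared) / (len(values) - 1)   # 3. divide by n - 1 for a sample
    return math.sqrt(variance)                    # 4. square root restores the units

print(round(std_dev([9.8, 10.2, 10.0, 9.9, 10.1]), 4))   # 0.1581
```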

A probability distribution with this shape is called a triangular distribution. If you look at control charts, or implement this within a control chart, you will see that it arises when two random effects, each with a uniform distribution, are added to give a combined effect.

As you combine more random effects, the point of the triangle flattens into a bell shape, giving a Gaussian distribution. A normal distribution arises when many different effects, each with its own distribution, combine into one; this is the content of the central limit theorem. It explains why, even in the incredibly complex systems of the natural world, many processes turn out to be approximately normal, from manufacturing measurements to patient survival data.
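This can be demonstrated with a short simulation. The snippet below uses the classic observation that the sum of twelve uniform(0, 1) draws, shifted down by 6, has mean 0 and standard deviation 1; if the central limit theorem holds, roughly 68% of the values should fall within one sigma of the mean, as a normal distribution would give. The sample count and seed are arbitrary choices.

```python
import random

random.seed(42)

# Sum of 12 uniform(0, 1) draws, minus 6: mean 0, standard deviation 1,
# and approximately normal by the central limit theorem.
samples = [sum(random.random() for _ in range(12)) - 6 for _ in range(100_000)]

within_one_sigma = sum(-1 <= s <= 1 for s in samples) / len(samples)
print(within_one_sigma)   # close to 0.68, as for a normal distribution
```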

Probability Distribution

So ultimately, if we know the standard deviation of a process, it is possible to calculate the probability of an output falling within a given range of values. The probability of a defect can be calculated with ease, as long as the measured value belongs to the distribution. If it is unlikely that a measured part could have come from the stable process, this indicates that a new, unknown special cause has emerged. The process is then out of control, and something has to be corrected so that the overall system can be brought back into line.
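As an illustrative sketch (the process mean, sigma and specification limits below are invented numbers), Python's `statistics.NormalDist` can turn a known mean and standard deviation into an expected defect probability:

```python
from statistics import NormalDist

# Hypothetical stable process: mean 10.0 mm, sigma 0.1 mm,
# with specification limits at 9.7 mm and 10.3 mm (i.e. plus/minus 3 sigma).
process = NormalDist(mu=10.0, sigma=0.1)

p_below = process.cdf(9.7)        # probability of a part below the lower limit
p_above = 1 - process.cdf(10.3)   # probability of a part above the upper limit
p_defect = p_below + p_above
print(f"{p_defect:.4%} of parts expected out of specification")   # about 0.27%
```

A part that would be wildly improbable under this distribution is exactly the signal, described above, that a special cause has emerged.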

Understanding process control is essential if you want to maintain it properly. By taking the time to implement systems like this, it becomes very easy to document the whole process from start to finish. It also becomes easier to find anomalies as soon as they occur, which is crucial in the world of manufacturing, as one small mistake that is not caught in time can have devastating effects on the whole production line.