Six Sigma

Six Sigma is a set of practices originally developed by Motorola to systematically improve processes by eliminating defects. A defect is defined as nonconformity of a product or service to its specifications.

While the particulars of the methodology were originally formulated by Bill Smith at Motorola in 1986, Six Sigma was heavily inspired by six preceding decades of quality improvement methodologies such as quality control, TQM, and Zero Defects. Like its predecessors, Six Sigma asserts the following:


 * Continuous efforts to reduce variation in process outputs are key to business success
 * Manufacturing and business processes can be measured, analyzed, improved and controlled
 * Succeeding at achieving sustained quality improvement requires commitment from the entire organization, particularly from top-level management

The term "Six Sigma" refers to the ability of highly capable processes to produce output within specification. In particular, processes that operate with six sigma quality produce at defect levels below 3.4 defects per (one) million opportunities (DPMO). Six Sigma's implicit goal is to improve all processes to that level of quality or better.
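The DPMO figure itself is a simple ratio, which can be sketched in a few lines of Python (the function name and the sample defect counts below are invented for illustration):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 17 defects found while inspecting 1,000 units,
# each offering 5 opportunities for a defect:
print(dpmo(17, 1000, 5))  # 3400.0 -- a long way from six sigma quality
```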

Six Sigma is a registered service mark and trademark of Motorola, Inc. Motorola has reported over US$17 billion in savings from Six Sigma as of 2006.

In addition to Motorola, companies that adopted Six Sigma methodologies early on and continue to practice them today include Bank of America, Caterpillar, Honeywell International (previously known as Allied Signal), Raytheon, Merrill Lynch and General Electric (introduced by Jack Welch).

A few retail companies have attempted to adapt this methodology to their business, with mixed success. Perhaps the most notable example is former Home Depot CEO Bob Nardelli's attempt to transplant the systems of his former employer, General Electric, into the retail industry. Applying Six Sigma to retail poses an inherent problem: retail is driven by people, while Six Sigma is driven by defects, so shortcomings in service must be framed as defects attributable to employees. Home Depot attempted to address this by thinning out its workforce and implementing training programs for the remaining employees in order to reduce defects. On paper this may work well, but once the human factor was applied it led to massive frustration among employees and customers because of the lack of salespeople on the floor at any one time: although the remaining employees were better trained, each was now required to help 22.8 customers per hour rather than the previous 13.4. Other retailers are learning from the mistakes of the first big-box retailers to attempt this and are adjusting the methodology to better suit their company goals.

Recently, some practitioners have used the TRIZ methodology for problem solving and product design as part of a Six Sigma approach.

Methodology
Six Sigma has two key methodologies: DMAIC and DMADV, both inspired by W. Edwards Deming's Plan-Do-Check-Act Cycle:  DMAIC is used to improve an existing business process, and DMADV is used to create new product or process designs for predictable, defect-free performance.

DMAIC
Basic methodology consists of the following five steps:
 * Define the process improvement goals that are consistent with customer demands and enterprise strategy.
 * Measure the current process and collect relevant data for future comparison.
 * Analyze to verify relationships and causality among factors. Determine what the relationships are, and attempt to ensure that all factors have been considered.
 * Improve or optimize the process based upon the analysis using techniques like Design of Experiments.
 * Control to ensure that any variances are corrected before they result in defects. Set up pilot runs to establish process capability, transition to production and thereafter continuously measure the process and institute control mechanisms.
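The Control step above relies on continuous measurement against control limits. A minimal sketch in Python, assuming an individuals chart with 3-sigma limits (the measurement data are invented; production implementations usually estimate sigma from moving ranges rather than the sample standard deviation):

```python
import statistics

def control_limits(samples, sigmas=3):
    """Center line and lower/upper control limits for an individuals chart."""
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return mean - sigmas * sd, mean, mean + sigmas * sd

measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
lcl, center, ucl = control_limits(measurements)
# Any point outside the limits signals that the process may be out of control.
out_of_control = [x for x in measurements if not lcl <= x <= ucl]
print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}  signals={out_of_control}")
```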

DMADV
Basic methodology consists of the following five steps:
 * Define the goals of the design activity that are consistent with customer demands and enterprise strategy.
 * Measure and identify CTQs (critical-to-quality characteristics), product capabilities, production process capability, and risk assessments.
 * Analyze to develop and design alternatives, create high-level design and evaluate design capability to select the best design.
 * Design details, optimize the design, and plan for design verification. This phase may require simulations.
 * Verify the design, set up pilot runs, implement production process and handover to process owners.

Some practitioners add a sixth step, Realize, producing DMAICR. Others contend that focusing on the financial gains realized through Six Sigma is counter-productive, and that such gains are simply byproducts of good process improvement.

Other Design for Six Sigma methodologies
Six Sigma as applied to product and process design has spawned an alphabet soup of alternatives to DMADV.

Statistics and robustness
The core of the Six Sigma methodology is a data-driven, systematic approach to problem solving, with a focus on customer impact. Statistical tools and analysis are often useful in the process. However, it is a mistake to view the core of the Six Sigma methodology as statistics; an acceptable Six Sigma project can be started with only rudimentary statistical tools.

Still, some professional statisticians criticize Six Sigma because practitioners have highly varied levels of understanding of the statistics involved.

Six Sigma as a problem-solving approach has traditionally been used in fields such as business, engineering, and production processes.

Implementation roles
One of the key innovations of Six Sigma is the professionalizing of quality management functions. Prior to Six Sigma, Quality Management in practice was largely relegated to the production floor and to statisticians in a separate quality department. Six Sigma borrows martial arts ranking terminology to define a hierarchy (and career path) that cuts across all business functions and a promotion path straight into the executive suite.

Six Sigma identifies several key roles for its successful implementation.
 * Executive Leadership includes CEO and other key top management team members. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements.
 * Champions are responsible for the Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from the upper management. Champions also act as mentors to Black Belts. At GE this level of certification is now called "Quality Leader".
 * Master Black Belts, identified by champions, act as in-house expert coaches for the organization on Six Sigma. They devote 100% of their time to Six Sigma. They assist champions and guide Black Belts and Green Belts. Apart from the usual rigor of statistics, their time is spent on ensuring integrated deployment of Six Sigma across various functions and departments.
 * Experts This level of skill is used primarily within Aerospace and Defense Business Sectors. Experts work across company boundaries, improving services, processes, and products for their suppliers, their entire campuses, and for their customers. Raytheon Incorporated was one of the first companies to introduce Experts to their organizations. At Raytheon, Experts work not only across multiple sites, but across business divisions, incorporating lessons learned throughout the company.
 * Black Belts operate under Master Black Belts to apply Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma.
 * Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities. They operate under the guidance of Black Belts and support them in achieving the overall results.
 * Yellow Belts are employees who have been trained in Six Sigma techniques as part of a corporate-wide initiative, but have not completed a Six Sigma project and are not expected to actively engage in quality improvement activities.

In many recent programs, Green Belts and Black Belts are empowered to initiate, expand, and lead projects in their area of responsibility. The roles as defined above, therefore, conform to the older Mikel Harry/Richard Schroeder model, which is not universally accepted.

Origin
Bill Smith did not really "invent" Six Sigma in the 1980s; rather, he applied methodologies that had been available since the 1920s developed by luminaries like Shewhart, Deming, Juran, Ishikawa, Ohno, Shingo, Taguchi and Shainin. All tools used in Six Sigma programs are actually a subset of the Quality Engineering discipline and can be considered a part of the ASQ Certified Quality Engineer body of knowledge. The goal of Six Sigma, then, is to use the old tools in concert, for a greater effect than a sum-of-parts approach.

The use of "Black Belts" as itinerant change agents is controversial as it has created a cottage industry of training and certification. This relieves management of accountability for change; pre-Six Sigma implementations, exemplified by the Toyota Production System and Japan's industrial ascension, simply used the technical talent at hand—Design, Manufacturing and Quality Engineers, Toolmakers, Maintenance and Production workers—to optimize the processes.

The expansion of the various "Belts" to include "Green Belt", "Master Black Belt" and "Gold Belt" is commonly seen as a parallel to the various "Belt Factories" that exist in martial arts.

The term Six Sigma
Sigma (the lower-case Greek letter σ) is used to represent the standard deviation (a measure of variation) of a population (the lower-case Latin letter s denotes the estimate based on a sample). The term "six sigma process" comes from the notion that if one has six standard deviations between the mean of a process and the nearest specification limit, practically no items will fail to meet the specifications. This is the basis of the Process Capability Study, often used by quality professionals. The term "Six Sigma" has its roots in this tool, rather than in simple process standard deviation, which is also measured in sigmas. Criticism of the tool itself, and of the way the term was derived from the tool, often sparks criticism of Six Sigma.

The widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO). A normally distributed process will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided Capability Study). This implies that 3.4 DPMO actually corresponds to 4.5 sigmas, not six as the process name would imply. This can be confirmed by running a Capability Study in QuikSigma or Minitab on data with a mean of 0, a standard deviation of 1, and an upper specification limit of 4.5. The 1.5 sigmas added to the name Six Sigma are arbitrary; they are called the "1.5 sigma shift" (SBTI Black Belt material, ca. 1998). Dr. Donald Wheeler dismisses the 1.5 sigma shift as "goofy".
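The same check can be made without specialist software: the one-sided tail area of a normal distribution beyond a given number of standard deviations follows directly from the complementary error function. A minimal sketch in Python:

```python
import math

def dpmo_beyond(z):
    """One-sided normal tail area beyond z standard deviations, in parts per million."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000

print(f"{dpmo_beyond(4.5):.2f} DPMO")  # ~3.40: the advertised 'six sigma' defect level
print(f"{dpmo_beyond(6.0):.4f} DPMO")  # ~0.001: the tail of a true six-sigma process
```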

In a Capability Study, sigma refers to the number of standard deviations between the process mean and the nearest specification limit, rather than to the standard deviation of the process, which is also measured in "sigmas". As the process standard deviation rises, or the process mean moves away from the center of the tolerance, the Process Capability sigma number falls, because fewer standard deviations then fit between the mean and the nearest specification limit (see Cpk Index). The notion that, in the long term, processes usually do not perform as well as they do in the short term is correct; it implies that a Process Capability sigma based on long-term data will be less than or equal to an estimate based on short-term data. The original use of the 1.5 sigma shift, however, is as shown above, and implicitly assumes the opposite.

As sample size increases, the error in the estimate of the standard deviation converges much more slowly than the error in the estimate of the mean (see confidence interval). Even with a few dozen samples, the estimate of the standard deviation drags an alarming amount of uncertainty into the Capability Study calculations. It follows that estimates of defect rates can be greatly influenced by uncertainty in the estimate of the standard deviation, and that the defective-parts-per-million figures produced by Capability Studies often ought not to be taken too literally.
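This sensitivity to the sigma estimate can be demonstrated with a small simulation: repeat a Capability Study many times on modest samples drawn from a process that truly sits 4.5 sigmas from its specification limit, and look at the spread of the implied DPMO (sample sizes and seed here are arbitrary choices):

```python
import math
import random
import statistics

random.seed(1)
USL = 4.5  # spec limit sits 4.5 true sigmas above the true mean of 0

def estimated_dpmo(n):
    """DPMO implied by a Capability Study on n samples from N(0, 1)."""
    data = [random.gauss(0, 1) for _ in range(n)]
    z = (USL - statistics.fmean(data)) / statistics.stdev(data)
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000

# Repeat the study 1,000 times with only 30 samples each:
estimates = sorted(estimated_dpmo(30) for _ in range(1000))
print("true value: 3.40 DPMO")
print(f"5th-95th percentile of estimates: {estimates[50]:.3f} to {estimates[950]:.1f} DPMO")
```

The estimates typically span several orders of magnitude around the true 3.4 DPMO, which is why such figures ought not to be taken too literally.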

Estimates for the number of defective parts per million produced also depend on knowing something about the shape of the distribution from which the samples are drawn. Unfortunately, there are no means of proving that data belong to any particular distribution; one can only assume normality, based on finding no evidence to the contrary. Estimating defective parts per million down into the hundreds or tens of units on the basis of such an assumption is wishful thinking, since actual defects often arise from the very deviations from normality that have been assumed not to exist.
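The cost of the normality assumption is easy to quantify. As an illustration (the Laplace distribution here is a hypothetical stand-in for a heavier-tailed reality, not anything prescribed by Six Sigma), compare the tail beyond 4.5 standard deviations under a normal distribution with that of a Laplace distribution scaled to the same variance:

```python
import math

z = 4.5  # distance from mean to nearest spec limit, in standard deviations

# Tail beyond z for a standard normal distribution:
normal_dpmo = 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000
# Tail beyond z for a unit-variance Laplace distribution: P(X > z) = exp(-z*sqrt(2))/2
laplace_dpmo = 0.5 * math.exp(-z * math.sqrt(2)) * 1_000_000

print(f"assumed (normal):        {normal_dpmo:.1f} DPMO")
print(f"heavier tails (Laplace): {laplace_dpmo:.0f} DPMO")  # over two orders of magnitude worse
```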

The ±1.5 Sigma Drift
The ±1.5σ drift is the drift of a process mean, which is assumed to occur in all processes. If a product is manufactured to a target of 100 mm using a process capable of delivering σ = 1 mm performance, over time a ±1.5σ drift may cause the long term process mean to range from 98.5 to 101.5 mm. This could be of significance to customers.
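The numerical effect of such a drift can be computed directly. The sketch below places specification limits at ±6σ around the target, matching the scenario above, and compares the expected defect rate before and after a +1.5σ drift:

```python
import math

def dpmo_two_sided(mean, sigma, lsl, usl):
    """Total defects per million for a normal process against two spec limits."""
    tail = lambda z: 0.5 * math.erfc(z / math.sqrt(2))
    return (tail((usl - mean) / sigma) + tail((mean - lsl) / sigma)) * 1_000_000

target, sigma = 100.0, 1.0                         # 100 mm target, 1 mm process sigma
lsl, usl = target - 6 * sigma, target + 6 * sigma  # spec limits at +/-6 sigma

print(f"mean on target: {dpmo_two_sided(target, sigma, lsl, usl):.4f} DPMO")
print(f"mean at +1.5s:  {dpmo_two_sided(target + 1.5 * sigma, sigma, lsl, usl):.2f} DPMO")
```

The drifted case lands at roughly 3.4 DPMO, which is exactly where the "six sigma = 3.4 DPMO" figure comes from: six sigmas of specification width, minus the assumed 1.5 sigma drift, leaves 4.5 sigmas to the nearest limit.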

The ±1.5σ shift was introduced by Mikel Harry. Harry referred to a 1975 paper by Evans, "Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts", on tolerancing, i.e. how the overall error in an assembly is affected by the errors in its components. Evans in turn refers to a 1962 paper by Bender, "Benderizing Tolerances – A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups". Bender looked at the classical situation of a stack of disks and how the overall error in the size of the stack relates to the errors in the individual disks. Based on "probability, approximations and experience", Bender suggests:

$$v = 1.5 \sqrt{\operatorname{var}(X)}$$



Harry then took this a step further. Supposing a process in which 5 samples are taken every half hour and plotted on a control chart, Harry considered the "instantaneous" initial 5 samples as "short term" (Harry's n = 5) and the samples throughout the day as "long term" (Harry's g = 50 points). Due to random variation in the first 5 points, the mean of the initial sample differs from the overall mean. Using the equation above, Harry derived a relationship between short-term and long-term capability, producing a capability shift or "Z shift" of 1.5. Over time, the original meanings of "short term" and "long term" have been changed, so that the shift is now attributed to "long term" drifting of the mean.
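The short-term/long-term distinction can be illustrated with a small simulation (this is an illustration, not Harry's derivation; the magnitude of the wandering mean and the seed are arbitrary, so the resulting shift will not in general equal 1.5):

```python
import math
import random
import statistics

random.seed(7)

# 50 half-hourly subgroups of 5 samples; the subgroup mean wanders a little.
subgroups = []
for _ in range(50):
    drift = random.gauss(0, 0.5)  # between-subgroup wandering of the mean
    subgroups.append([drift + random.gauss(0, 1) for _ in range(5)])

# "Short term": pooled within-subgroup variation only.
short_term = math.sqrt(statistics.fmean(statistics.variance(s) for s in subgroups))
# "Long term": all 250 points treated as one sample.
long_term = statistics.stdev([x for s in subgroups for x in s])

usl = 6.0  # spec limit six short-term sigmas from the nominal mean of 0
z_shift = usl / short_term - usl / long_term
print(f"short-term sigma {short_term:.2f}, long-term sigma {long_term:.2f}, Z shift {z_shift:.2f}")
```

Because the long-term sigma absorbs the between-subgroup wandering, the long-term capability is lower than the short-term capability, and the difference between the two Z values is the capability shift.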

Harry has clung tenaciously to the "1.5" but over the years, its derivation has been modified. In a recent note from Harry, "We employed the value of 1.5 since no other empirical information was available at the time of reporting." In other words, 1.5 has now become an empirical rather than theoretical value. Harry further softened this by stating "... the 1.5 constant would not be needed as an approximation". Interestingly, 1.5σ is exactly one half of the commonly accepted natural tolerance limits of 3σ.

Despite this, industry is resigned to the belief that it is impossible to keep processes on target and that process means will inevitably drift by ±1.5σ. In other words, if a process has a target value of 0.0, specification limits at 6σ, and natural tolerance limits of ±3σ, over the long term the mean may drift to +1.5 (or -1.5).

In truth, any process where the mean changes by 1.5σ, or any other statistically significant amount, is not in statistical control. Such a change can often be detected by a trend on a control chart. A process that is not in control is not predictable. It may begin to produce defects, no matter where specification limits have been set.

Digital Six Sigma
In an effort to minimize variation permanently, Motorola has evolved the Six Sigma methodology to use information systems tools to lock in business improvements. Motorola calls this effort Digital Six Sigma.

Originality
Noted Quality expert Joseph Juran has criticized Six Sigma as "a basic version of quality improvement", stating that "[t]here is nothing new there."

Studies that indicate negative effects caused by Six Sigma
A Fortune article stated that "of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since." The statement is attributed to "an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process)." The gist of the article is that Six Sigma is effective at what it is intended to do, but that it is "narrowly designed to fix an existing process" and does not help in "coming up with new products or disruptive technologies."

When Six Sigma is used as a cost cutting program, it has been shown to stifle new product innovation.

Based on arbitrary standards
While 3.4 defects per million may work well for certain products and processes, it may not be ideal for others. A pacemaker might need tighter standards, for example, whereas a direct mail advertising campaign might tolerate looser ones. The basis and justification for choosing six as the number of standard deviations is not clearly explained.

Examples of some key tools used

 * 5 Whys
 * ANOVA
 * ANOVA Gage R&R
 * Axiomatic design
 * Catapult Exercise on variability
 * Cause & Effects Diagram (also known as Fishbone or Ishikawa Diagram)
 * Chi-Square Test of Independence and Fits
 * Control Charts
 * Correlation
 * Cost Benefit Analysis
 * CTQ Tree
 * SIPOC (Suppliers Inputs Process Outputs Customers) Maps
 * Customer survey
 * Design of Experiments
 * Failure Modes Effects Analysis
 * General Linear Model
 * Histograms
 * Homogeneity of Variance
 * Process Maps
 * Regression
 * Run Charts
 * Stratification
 * Taguchi
 * Thought Process Map