Inspection Features

Automatic Defect Classification: A Productivity Improvement Tool
by Tony Esposito, IBM Corporation; Mark Burns and Scott Morell, KLA-Tencor; Eric Wang, Stanford University
Why should a semiconductor fab invest the time to review and classify defects on a wafer after the wafer has been inspected? Adding a classification step to an already lengthy fabrication process runs counter to manufacturing fundamentals unless it can be proven that the step positively influences final yield. The extra information about the source of defects is an obvious benefit that defect review and classification provide. However, a method for quantifying that benefit is required.

Traditionally, classification is done manually by a human operator after the wafer is inspected. Manual review and classification of defects supplies defect source information but carries with it several less-than-ideal side effects. From a manufacturing standpoint, the extra processing step increases the total time it takes for a lot to work through the process flow, and classification requires additional employees and review tools on which to perform the work. From an engineering standpoint, the information fed back by classification is only useful if it is accurate and consistent. In practice, a multitude of factors impact the accuracy of classification, including operator experience, operator fatigue, consistency from operator to operator, the cost of operator labor, the cost of review stations, and the excessive queue time lots spend waiting for review after in-line inspection. The ideal solution is to hand the task of classification to an automated system that reduces or eliminates most of these negative side effects.
Beta evaluation of IMPACT/Online™

To take a scientific approach to this task, IBM installed a beta version of IMPACT/Online ADC on a KLA-Tencor 2132 defect inspector at IBM Burlington in order to collect data and analyze costs. The system was trained to classify five different production levels as part of the beta tool evaluation. The levels included: Trench Isolation, two metal levels, POLY on 64 Mb DRAM, and After-Develop Inspection or ADI (single-layer cluster tool daily monitor wafers using 64 Mb pitch). For each level, a minimum of 10 lots were used to measure the performance of ADC against a pre-defined set of metrics.

ADC performance

The overall ADC performance of the five process levels (figure 1) was measured and recorded. Each of the ADC classifiers performed at or above the expected levels for the beta evaluation.

Process/Level            Accuracy   Purity   Redetection
Defect Standard Wafer    97%        100%     100%
ADI Excursion Monitor    80%        89%      91%*
4 Mb Metal 1             72%        80%      100%
16 Mb Metal 1            77%        87%      99%
64 Mb POLY               80%        80%      98%

Figure 1. Beta performance of IMPACT/Online ADC. *The lower-than-normal redetection for the ADI monitor is due to nuisance defects. Subsequent to follow-on beta testing, the inspection recipe was modified using Segmented Auto Threshold (SAT), which reduced the nuisance defects and improved redetection to greater than 95 percent.
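The article does not spell out how the three beta metrics are computed. Under the standard definitions (accuracy: fraction of defects whose ADC call matches the expert call; purity: of all defects ADC places in a class, the fraction the expert agrees belong there; redetection: fraction of inspector-reported defects relocated at review), the arithmetic can be sketched as below. The class codes and counts are hypothetical illustrations, not the evaluation's data.

```python
# Sketch of the beta-evaluation metrics from paired (expert, adc)
# class labels. These are common definitions, assumed here; treat
# this as an illustration rather than the tool's actual arithmetic.

def accuracy(expert, adc):
    """Fraction of defects where the ADC call agrees with the expert call."""
    return sum(e == a for e, a in zip(expert, adc)) / len(expert)

def purity(expert, adc, cls):
    """Of all defects ADC put in class `cls`, the fraction the expert agrees with."""
    assigned = [e for e, a in zip(expert, adc) if a == cls]
    if not assigned:
        return 0.0
    return sum(e == cls for e in assigned) / len(assigned)

def redetection(reported, redetected):
    """Fraction of inspector-reported defects relocated at review."""
    return redetected / reported

# Hypothetical review of eight defects (class codes as used in figure 3):
expert = ["SF", "SF", "SX", "RR", "SX", "SF", "NV", "SX"]
adc    = ["SF", "SF", "SX", "SF", "SX", "SF", "NV", "NV"]
print(accuracy(expert, adc))       # 6 of 8 calls agree -> 0.75
print(purity(expert, adc, "SF"))   # 3 of 4 "SF" calls confirmed -> 0.75
```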
Case study: ADI excursion monitor

At the time of this study, the production classification strategy for the ADI excursion monitor was in transition from manual review and classification to online automated defect classification using IMPACT. Therefore, the data collected for this study includes classification data from both the operators and ADC.
Autumn 1998
Yield Management Solutions
29
Data collection procedure
For the purposes of this study, data and images from ADI wafers were chosen randomly across a two-week period. The accuracy and purity performance (figure 2) was calculated for the operators and for ADC, with the expert classifications as the basis for comparison. The ADC classifications are more in line with those of the expert than the classifications generated by the five operators, as indicated by the 20+ percent difference in accuracy and purity.
            Accuracy   Purity
ADC         0.73       0.85
Operator    0.49       0.62

Figure 2. Accuracy and purity numbers for ADC and manual classifications.

The pareto (figure 3) of defect classifications is ordered by expert classification and includes the classification data from all three sources: expert, ADC and manual.

Figure 3. Pareto of ADI excursion monitor classifications (percent of defects classified per class — SF, SX, NV, IF, BB, IM, RR, MM, OT — for expert, ADC and operator).

The relative magnitudes of the ADC classifications match those of the expert classifier, while the operator calls differ greatly in the SX (small defects) and SF (foreign material on the surface) class categories. This difference was found to be consistent across the five operators involved in the study. Other sources of manual classification errors were systematic: a single operator consistently classified SF defects as RR (residual resist), which suggests a need for additional training.

Correlation to yield

IBM’s PLY (Photo Limited Yield) methodology for line monitoring uses defect kill potentials to monitor the impact of physical defects on die yield. The ADI defect types were consolidated into two groups (figure 4): a killer group, which consists of defect types with a kill potential equal to 100 percent, and a non-killer group, which consists of all defect types with a kill potential less than 100 percent. The highest kill potential in the non-killer group was 35 percent, with an average of less than 10 percent.

Figure 4. Killer and non-killer defect statistics for the ADI excursion monitor (percent of defects classified, operator versus ADC).

While the ADC classifications produce a similar split of non-killer versus killer defects, the operator classifications favor the non-killer defect types. The more inaccurate operator classifications result in a systematic under-estimation of the impact that the defects are having on electrical yield. This defeats the purpose of the in-line inspection in that elevated yield loss is not discovered until end-of-line electrical testing. By this time, the manufacturing line may be full of substandard-yielding material.

Time-to-results

The total time-to-results is defined as the time it takes to acquire data that an engineer can use to start appropriate defect reduction actions1. For manual or automatic defect classification, the total time includes:
• the inspection time;
• the queue time between inspection and the review station;
• the time required to load and align the wafers on a review station; and
• the time required to perform manual or automatic classification3.
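The components above can be totaled directly. A minimal sketch using the per-wafer averages reported in figure 5 (in hours), with inspection time excluded as in the article's time-to-classification metric:

```python
# Sketch: total time-to-classification as the sum of the review
# components, using the per-wafer averages from figure 5 (hours).
components = ["queuing", "setup", "image_capture", "classification"]

operator = {"queuing": 1.0, "setup": 0.15, "image_capture": 0.0,    "classification": 0.034}
adc      = {"queuing": 0.0, "setup": 0.01, "image_capture": 0.0001, "classification": 0.0136}

t_operator = sum(operator[c] for c in components)   # 1.184 hours
t_adc      = sum(adc[c] for c in components)        # 0.0237 hours
print(f"operator {t_operator:.4f} h, ADC {t_adc:.4f} h, ratio {t_operator / t_adc:.0f}x")
```

The ratio works out to roughly 50, matching the factor-of-50 advantage the article reports for ADC.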
The average total time-to-classification (figure 5) favors the ADC system by a factor of 50. With IMPACT/Online ADC, the classification immediately follows the inspection step, which eliminates the queue time associated with manual classification. The time-to-classification is a subset of the total time-to-results that ignores the inspection time.

Review Time Component    Operator   ADC
Queuing                  1          0
Set up                   0.15       0.01
Image capture            0          0.0001
Classification           0.034      0.0136
Average Total (hours)    1.184      0.0237

Figure 5. Average time-to-classification results of ADC versus the operators.

Quantifying the benefits of defect review

The most obvious benefit of defect review is that it supplies information about the types of defects on a semiconductor wafer. The defect type information assists the yield engineers in identifying the sources of those defects. However, to choose the optimal in-line inspection and review sampling strategy, a method is needed for quantifying the benefits of all available strategies. A cost-based inspection and review sampling model for mean-shift random defect excursions has been developed by the Competitive Semiconductor Manufacturing (CSM) Automated Inspection Focus Study Research Group6 and is the subject of reference3. This economic model may be applied in this case study to quantify and minimize the total defect excursion cost, which consists of the out-of-control cost, the in-control cost, the investigation cost, the fixing cost and the false alarm cost3, 4, 5. A simple view of the process control dynamics is used (figure 6) to describe the basic premise of the economic model2, 3, 4:

Figure 6. Diagram of cost-based sampling model (a timeline running from excursion occurrence, through detection delay and excursion detection, to the excursion being fixed, annotated with the in-control, out-of-control, investigation, fixing and false alarm costs).

• The in-control cost is the product of the cost of baseline yield loss and the duration of time a process is in-control.
• The out-of-control cost is the product of the cost of yield loss during an excursion and the duration of time the process is out-of-control. The time a process is out-of-control is the sum of the detection delay, the investigation time and the fixing time. Accurate and timely defect review information will reduce this cost by reducing the detection delay and the time spent investigating the source.
• The cost of finding the root cause of an excursion is the investigation or source identification cost. Again, accurate and timely defect review information will reduce this cost by reducing the time spent investigating the source.
• The cost of implementing changes to return the process to an in-control state is the fixing cost.
• The false alarm cost is the cost of reacting to an excursion when the process is actually in-control. It is usually a combination of investigation and fixing costs.

The dominant cost in the total cost equation is typically the out-of-control cost. The electrical die yield of wafers processed while out-of-control is typically less than the die yield of wafers processed while in-control. This loss of product means loss of the revenue that the product would normally generate. To minimize this revenue loss, the amount of time a process runs in an out-of-control state must be minimized.

Excursion cost drivers — sensitivity analysis of the economic model

Information from in-line inspection sampling is used to determine whether an excursion has occurred or not1.
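The cost elements enumerated above combine into a single total-excursion-cost expression. A sketch of that arithmetic follows; all rates and durations here are made-up illustrative inputs, since the article does not disclose the dollar figures behind its results.

```python
# Sketch of the cost-based excursion model: total cost is the sum of the
# in-control, out-of-control, investigation, fixing and false alarm
# costs, with out-of-control time = detection delay + investigation +
# fixing time. All parameter values are illustrative assumptions.

def excursion_cost(p):
    t_out_of_control = p["detection_delay_h"] + p["investigation_h"] + p["fixing_h"]
    in_control   = p["baseline_loss_per_h"] * p["in_control_h"]
    out_control  = p["excursion_loss_per_h"] * t_out_of_control
    investigate  = p["investigation_cost_per_h"] * p["investigation_h"]
    fixing       = p["fixing_cost"]
    false_alarms = p["false_alarms"] * p["false_alarm_cost"]
    return in_control + out_control + investigate + fixing + false_alarms

# Hypothetical manual-classification scenario: a long detection delay
# driven by slow, inaccurate review.
manual = {
    "detection_delay_h": 40.0, "investigation_h": 10.0, "fixing_h": 5.0,
    "baseline_loss_per_h": 100.0, "in_control_h": 100.0,
    "excursion_loss_per_h": 1000.0, "investigation_cost_per_h": 200.0,
    "fixing_cost": 5000.0, "false_alarms": 1, "false_alarm_cost": 2000.0,
}
# Same excursion, but ADC shortens the detection delay.
adc = {**manual, "detection_delay_h": 8.0}

print(excursion_cost(manual))  # 74000.0
print(excursion_cost(adc))     # 42000.0
```

With these assumed inputs, the only change between scenarios is the detection delay, which is exactly the lever the article identifies as the dominant cost driver.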
The uncertainty in making this determination is measured using two risk factors — alpha and beta — both of which are based on the overlap between the in-control and out-of-control defect distributions (figure 7). The beta risk is of primary concern since it determines the length of the detection delay (figure 6), which is the duration of time the process runs in an out-of-control state before it is detected by the in-line control system. The beta risk is represented by the percentage of the out-of-control distribution to the left of the statistical process control (SPC) limit. By increasing the frequency of in-line inspection, the accuracy of the defect classifications, and the size of the review sample, the defect distributions become more distinct and the overlap between the two is minimized. Minimal overlap translates into reduced beta risk-driven detection delay, which reduces the cost of an excursion.

Figure 7. Model of defect distributions — graphical representation of beta risk (the in-control and out-of-control distributions overlapping around the SPC limit).

Comparing the benefits of various ADI excursion monitor strategies

The data collected for the ADI excursion monitor was used to model the economic benefits of defect review and classification. The total detection delays, including the review and inspection times, are displayed for various ADI excursion monitor scenarios in figure 8. The inspection portion of the ADC bars includes the review time associated with automated defect classification on an average ADI wafer. The first two bars describe a theoretical scenario where the accuracy of classification for manual and automated defect classification is perfect at 100 percent. The second two bars in the series compare manual defect classification and ADC as measured in the case study. Note the overwhelming contribution of the beta risk, which accounts for 90 percent of the total detection delay for all four scenarios.

Figure 8. Total detection delay for various ADI excursion monitor strategies (detection delay in hours, broken into beta risk, inspection and review components, for MDC at 100 percent accuracy, ADC at 100 percent accuracy, MDC at 49 percent accuracy and ADC at 73 percent accuracy).

As a key driver of beta risk and, therefore, detection delay (figure 9), the accuracy of defect classification greatly affects the cost of an excursion.

Figure 9. Beta risk-driven detection delay versus classification accuracy.

Looking at the excursion cost in terms of revenue lost per hour (figure 10) determines which in-line monitor strategy is optimal for the ADI process. In the total cost equation (figure 6), source identification time and fixing time play a close second and third to the costs associated with beta risk-driven detection delay. Note that ADC, based on performance measured during the case study, is the most cost-effective classification strategy. By adopting ADC on the ADI excursion monitor, IBM can expect to save over $250 per hour in revenue versus the manual defect classification strategy. This equates to more than $42,000 per week of revenue saved by implementing ADC at one in-line monitor location.

Figure 10. Revenue loss rate (dollars per hour) for various ADI excursion monitor strategies, broken into beta risk, inspection, review, source identification and fixing components.
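If the in-control and out-of-control defect counts are modeled as normal distributions, the beta risk described above has a closed form, and a simple geometric argument gives the expected number of inspections needed to detect the excursion. The distribution parameters below are illustrative assumptions, not IBM's data.

```python
# Sketch: beta risk under an assumed normal model. Beta risk is the
# probability that an out-of-control sample still falls below the SPC
# limit (a missed detection). Parameters are illustrative.
from statistics import NormalDist

spc_limit = 50.0                                  # defect-count control limit
out_of_control = NormalDist(mu=60.0, sigma=10.0)  # assumed excursion distribution

beta = out_of_control.cdf(spc_limit)              # P(miss on a single inspection)

# If successive inspections are independent, the number of inspections
# until detection is geometric, so the expected count is 1 / (1 - beta).
expected_inspections = 1.0 / (1.0 - beta)
print(f"beta = {beta:.3f}, ~{expected_inspections:.2f} inspections to detect")
```

Improving classification accuracy separates the two distributions (a larger effective mean shift), which drives beta down and with it the detection delay, consistent with the trend in figure 9.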
Conclusions

Using the excursion cost model enabled IBM to quantify the benefits of classification. The exercise revealed that classification, in general, is a vital part of a cost-efficient in-line monitor strategy. In addition, classification metrics, such as the accuracy and review time associated with classification, determine the cost of an excursion. The key advantages in classification accuracy and time-to-results substantiate the need for online ADC as a replacement for manual defect classification on the ADI in-line defect monitor. The revenue losses associated with excursions are reduced by an estimated $42,000 per week by implementing IMPACT/Online ADC at the ADI in-line monitor location.

The success at this process monitor, along with that of the entire beta evaluation, has motivated IBM to pursue the implementation of more production monitors using IMPACT/Online ADC. Future IBM interests include SEM-based ADC and ADC on laser-scattering defect inspection tools. The optical limitations of identifying defects smaller than 0.35 µm, combined with the constant reduction in critical dimensions that comes with new process technologies, favor an SEM-based ADC solution.
1 Louis Breaux and Dave Kolar, “Automatic Defect Classification for Effective Yield Management”, Solid State Technology, December 1996, pp. 89-96.
2 Nurani, R.K., R. Akella, A.J. Strojwas, R. Wallace, M.G. McIntyre, J. Shield, I. Emami, “Development of an Optimal Sampling Strategy for Wafer Inspection”, International Symposium on Semiconductor Manufacturing Proceedings, Tokyo, Japan, June 1994.
3 Wang, E.H., “An Integrated Framework for Yield Learning in Semiconductor Manufacturing”, Stanford University Ph.D. Dissertation, May 1997, Chapter 3.
4 Wang, E.H. and D. Fletcher, “Optimal Wafer Inspection Strategy with Learning Effects”, ASMC, October 1996.
5 Nurani, R.K., R. Akella and A.J. Strojwas, “In-line Defect Sampling Methodology in Yield Management: An Integrated Framework”, IEEE Transactions on Semiconductor Manufacturing, 1996.
6 The Competitive Semiconductor Manufacturing (CSM) Automated Inspection Focus Study Research Group consists of professors and Ph.D. candidates from UC Berkeley, Stanford and Carnegie Mellon Universities.

This article was first presented as a paper at the Advanced Semiconductor Manufacturing Conference and Workshop (IEEE/SEMI), Cambridge, MA, September 10-12, 1997.