July 14, 2006 -- Design for manufacturing (DFM) has received increasing attention at DAC since 1999, and very noticeably so since 2003. Since the term DFM was coined a decade or more ago, its definition has shifted with the context in which it is used. At this year's conference, a wide variety of problems have been grouped into the DFM sessions, spanning design-litho interactions, design rule checking, and enhanced corner-based design methodologies that enable improved analysis, modeling, and optimization.
This year's DFM session at DAC has two long papers and two short papers.
The first paper, titled "Process Variation Aware OPC with Variational Lithography Modeling," proposes a new approach to process window-aware OPC. Conventional OPC corrects the layout at the best-focus condition; however, focus and exposure variations arising from several sources in the lithography and pattern transfer process cause intra-die linewidth variation even after OPC. Since performing lithography simulation at multiple de-focus conditions during model-based OPC is computationally expensive, this paper proposes an analytical approach to process window OPC. By combining a variational (de-focus) lithography model with a new variational edge placement error metric, the proposed approach modifies conventional OPC to reduce linewidth variation due to de-focus and exposure dose. Optimizing OPC across the overlapping process windows of all layout features can reduce across-chip linewidth variation.
Enhancements to conventional OPC that handle CD change due to focus and exposure dose variation have been proposed in prior work. However, the runtime penalty of performing lithography simulations at multiple focus and exposure conditions during model-based OPC is significant. To avoid running lithography simulations multiple times, this paper proposes an analytical method for evaluating aerial images using the Hopkins equation, which gives the intensity of the aerial image produced on the wafer by convolving the mask pattern with the illumination source. Conventional OPC evaluates this at a single de-focus value. The proposed approach extends the Hopkins equation to account for de-focus in lithography: the aerial image of a feature is expressed as an infinite summation of image intensities at multiple de-focus conditions. The largest moment of the expansion corresponds to nominal de-focus, and the moments for other de-focus conditions shrink as de-focus increases.
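The general shape of such a series can be sketched as follows; this is a schematic form chosen for illustration, and the notation (coefficients a_m and the de-focus variable delta) is an assumption rather than the paper's exact formulation:

```latex
% Schematic de-focus expansion of aerial image intensity (illustrative
% notation only; the paper's derivation may differ). I_0 is the
% best-focus image; higher-order coefficients a_m shrink with order m.
I(x;\delta) \;=\; \sum_{m=0}^{\infty} a_m(x)\,\delta^{m}
\;\approx\; I_0(x) \;+\; a_1(x)\,\delta \;+\; a_2(x)\,\delta^{2}
```

Because the higher-order coefficients decay, the series can be truncated after a few terms, giving an image estimate at any de-focus value within the process window without rerunning a full lithography simulation.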
The analytical approach to de-focus-aware lithography modeling exploits this relationship to perform fast lithography simulation during OPC iterations. The main objective of conventional model-based OPC is to minimize the edge placement error (EPE) at different fragments of a layout feature. This is achieved by alternating edge movements and lithography simulation for some specified number of iterations; edge movements stop before the maximum iteration count if the solution converges to within the user-specified EPE tolerance. In a variational lithography environment, EPE is a distribution rather than a single number. This paper uses a variational EPE metric, defined as the mean of the EPE distribution over multiple de-focus values within the process window. Standard (edge movement-based) OPC is then performed by combining the analytical approximation of the variational lithography model with this variational EPE metric. Since the OPC solution is de-focus-aware, CD variation due to focus is smaller than with conventional OPC.
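A minimal sketch of how such a variational EPE metric could drive a standard edge-movement loop is shown below. The toy printed-edge model, the de-focus samples and weights, and all names are illustrative assumptions, not the paper's implementation:

```python
# Sketch of OPC edge movement driven by a variational EPE metric: EPE is
# evaluated at several sampled de-focus values and averaged, rather than
# at best focus only. All models and numbers here are toy assumptions.
from dataclasses import dataclass

@dataclass
class Fragment:
    position: float   # current mask edge position (nm)
    target: float     # desired printed edge position (nm)

def printed_edge(frag, defocus):
    """Stand-in for lithography simulation: the printed edge drifts
    quadratically with de-focus (an assumed toy response)."""
    return frag.position + 2.0 * defocus ** 2

def variational_epe(frag, defocus_samples, weights):
    """Weighted mean EPE over the sampled process window."""
    epes = [printed_edge(frag, d) - frag.target for d in defocus_samples]
    return sum(w * e for w, e in zip(weights, epes)) / sum(weights)

def opc(fragments, defocus_samples, weights, tol=0.1, max_iters=20, gain=0.7):
    for _ in range(max_iters):
        worst = 0.0
        for frag in fragments:
            epe = variational_epe(frag, defocus_samples, weights)
            frag.position -= gain * epe        # move edge against the error
            worst = max(worst, abs(epe))
        if worst <= tol:                       # converged within tolerance
            break
    return fragments

frags = opc([Fragment(position=0.0, target=0.0)],
            defocus_samples=[-0.1, 0.0, 0.1], weights=[0.25, 0.5, 0.25])
print(f"corrected edge position: {frags[0].position:.4f} nm")
```

The only change from a conventional loop is that the correction is driven by the weighted mean EPE across de-focus samples rather than by the best-focus EPE alone.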
The intra-die component is a significant portion of total process-induced variation, and modeling it is necessary for statistical analysis and optimization. The next paper, titled "Modeling of Intra-Die Process Variations For Accurate Analysis and Optimization of Nano-scale Circuits," proposes a new approach for efficient modeling of intra-die variations. In the presence of intra-die variations, the parameters of every device must be treated as random variables, and any analysis that carries a correlated random variable for each device in the design is computationally prohibitive. To reduce the number of correlated random variables, statistical analysis and optimization techniques divide the layout into a grid and treat random variables in different grid locations as uncorrelated. By applying principal component analysis (PCA), the set of correlated variables within the same grid location is converted into uncorrelated random variables. Although this method reduces the dimensionality of the variability space, it suffers from a big drawback: the accuracy of analysis and optimization depends on how the grid is created, and determining the optimum grid-cell size that captures correlations efficiently remains an open question. The objective of this paper is to overcome this problem with grid-based methods using the Karhunen-Loève expansion (KLE).
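As a concrete illustration of the grid-based step just described, the sketch below builds a toy distance-based covariance over grid cells and applies PCA to obtain uncorrelated components; the exponential covariance model and all numbers are assumptions for illustration only:

```python
# Grid-based PCA decorrelation: each grid cell carries one correlated
# random variable; eigendecomposition of the covariance matrix (PCA)
# re-expresses them as uncorrelated components. Toy covariance model.
import numpy as np

n = 4                                   # 4x4 grid of cells on the die
xy = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
cov = 0.05 ** 2 * np.exp(-dist / 2.0)   # assumed distance-based covariance

eigvals, eigvecs = np.linalg.eigh(cov)  # PCA of the covariance matrix
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1
print(f"{k} of {n * n} principal components capture 95% of the variance")

# One sample of per-cell parameter deviations from k independent normals:
z = np.random.default_rng(0).standard_normal(k)
delta = eigvecs[:, :k] @ (np.sqrt(eigvals[:k]) * z)
```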
Using the KLE approach, the random variables associated with each device are modeled as a stochastic process over the entire die, which effectively makes the random variables of any two devices correlated. These correlations, captured by a covariance function, can be expanded using the KLE, which expresses a stochastic process as a series expansion over uncorrelated random variables. The moments (i.e., coefficients) of the terms in the expansion are evaluated by solving an integral equation involving the covariance function. For practical purposes, the expansion can be truncated once the eigenvalues of the covariance function fall below a user-specified threshold. Because it is based on analytical solutions of the integral equations, this method is computationally more efficient than the PCA approach, which is O(N^3) in the number N of grid locations on the die.
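The truncation idea can be illustrated numerically as below. Where the paper relies on analytical solutions of the covariance integral equation, a simple discretization stands in here, and the exponential covariance is an assumed toy model:

```python
# Karhunen-Loeve expansion of a 1-D variation process across the die,
# illustrated with a Nystrom-style discretization of the covariance
# integral equation (the paper uses analytical solutions instead).
import numpy as np

L = 1.0                                  # die width (normalized)
m = 200                                  # sample points across the die
x = (np.arange(m) + 0.5) * (L / m)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)  # toy covariance

w = L / m                                # quadrature weight
eigvals, eigvecs = np.linalg.eigh(cov * w)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Truncate once eigenvalues fall below a user-specified threshold.
k = int(np.sum(eigvals > 1e-2 * eigvals[0]))
print(f"KLE truncated at {k} of {m} terms")

# One realization: p(x) = sum_i sqrt(lambda_i) * z_i * phi_i(x),
# with z_i independent standard normals.
z = np.random.default_rng(1).standard_normal(k)
p = eigvecs[:, :k] / np.sqrt(w) @ (np.sqrt(eigvals[:k]) * z)
```

A modest number of terms captures most of the variance, which is the source of the efficiency gain over carrying one random variable per grid cell.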
Statistical design has been a subject of very active research in the past few years. However, these techniques have not yet evolved to a stage where they can be applied to the multi-million-gate designs common in current technology nodes. Modeling and optimization techniques that reduce the complexity of statistical design will enable its faster adoption.
Resolution enhancement techniques (RET) and process optimizations provided relatively tight process control at 90nm and 65nm, allowing the use of corner-based design methodologies in these technology nodes. Aggressive RET combined with high-NA lithography, along with restricted design rules (layout pitches optimized for litho equipment and process recipes), will allow this methodology to be extended to the 45nm node. Statistical analysis and optimization techniques will become a necessity in mainstream designs starting at the 32nm node, where physical gate length is on the order of 10nm. Until purely statistical design analysis is adopted, corner-based methodologies should account for process variations accurately in order to reduce excessive pessimism. The next paper in this session, titled "Computation of Accurate Interconnect Process Parameter Values for Performance Corners under Process Variations," addresses this topic.
Process corner analysis assumes that the worst-case and best-case values of process parameters coincide with the performance corners, i.e., the best-case and worst-case delay of a stage. This paper shows that interconnect process corners may not coincide with the actual interconnect performance corners. Given the nominal process parameters, the proposed methodology performs fast static timing analysis to determine the correct performance corners of a given stage and bounds on their variation. Interconnect performance corners are typically fixed based on the delay model used for timing analysis; however, the actual worst-case and best-case delays of interconnects in a block do not correspond to any of these corners, but instead lie inside the region the corners define. Actual process corners are determined by equating the interconnect delay evaluated from STA to the analytical delay model containing the unknown parameter values. Methods similar to the proposed approach can reduce pessimism in corner-based methodologies.
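A toy version of this back-solving step is sketched below, using a one-parameter Elmore delay model with wire width as the unknown; the delay model, the constants, and the assumed 10%-above-nominal worst-case delay are all illustrative, not taken from the paper:

```python
# Back-solving an "effective" interconnect corner: given a worst-case
# stage delay (as STA would report), find the wire-width value whose
# analytical Elmore delay reproduces it. Model and numbers are toy.
from scipy.optimize import brentq

R_DRV, C_LOAD, LEN = 100.0, 2e-15, 500.0      # ohms, farads, microns

def elmore_delay(width_um):
    """Toy Elmore delay of a driver + wire + load stage (seconds)."""
    r_wire = 0.08 * LEN / width_um            # resistance falls with width
    c_wire = (0.2 * width_um + 0.05) * 1e-15 * LEN  # area + fringe cap
    return R_DRV * (c_wire + C_LOAD) + r_wire * (c_wire / 2 + C_LOAD)

# Assume STA reports a worst-case stage delay 10% above nominal:
t_worst = 1.10 * elmore_delay(0.10)

# The effective corner is the width whose model delay matches STA.
w_corner = brentq(lambda w: elmore_delay(w) - t_worst, 0.05, 0.30)
print(f"effective worst-case wire width: {w_corner:.4f} um")
```

The back-solved parameter value generally differs from the extreme value a traditional process corner would assume, which is exactly the source of pessimism the paper targets.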
The last paper in the session discusses standard cell characterization in the presence of lithography-induced process variations. Intra-field linewidth variation interacts systematically with de-focus in lithography, and earlier DAC papers have demonstrated timing pessimism reductions of up to 40% by accounting for this systematic linewidth variation.
Spice corners used for characterizing standard cell timing and leakage do not consider this systematic dependency. For worst-case corners, Spice models assume the maximum linewidth deviation (either positive or negative) from the nominal value that can occur anywhere in the design, across the controllable process window. The worst-case linewidth deviation of devices within an individual standard cell is significantly smaller. This paper proposes a standard cell characterization methodology that exploits the systematic interaction between layout pitch and de-focus: the linewidth of dense lines increases with de-focus, whereas that of isolated lines decreases. By capturing the impact of focus on the post-OPC linewidth of standard cell devices, the spread between best-case and worst-case corners can be decreased significantly.
Systematic linewidth change caused by focus is captured by performing lithography simulation on standard cells at different de-focus conditions. The main component of the proposed methodology is analysis of the impact of non-rectangular device geometries on the timing and leakage of the standard cell. Existing Spice models cannot handle such non-uniform geometries. To overcome this limitation, the litho-simulated contour of each device is approximated by a set of rectangles of uniform height but different widths. The I-V characteristics of each rectangular component of the device are determined from pre-characterized lookup tables, which are constructed by running Spice simulations on devices with varying linewidths. The total current of the non-rectangular printed device is obtained by summing the currents of all rectilinearized components. Timing can then be characterized by modifying the linewidth values of each device in the Spice netlist of each standard cell. By considering systematic variation in this way, corner-spread reductions between 7% and 25% were demonstrated for standard cells in a 65nm technology.
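A sketch of this rectilinear slicing and table lookup is given below; the contour samples, table values, and linear interpolation are toy assumptions standing in for real litho contours and Spice-characterized tables:

```python
# Slicing a litho-simulated, non-rectangular gate into uniform-height
# rectangles: total drive current is the sum of per-slice currents read
# from a pre-characterized linewidth -> current table. Toy data only.
import numpy as np

# Lookup table: local gate linewidth (nm) -> on-current per um of device
# width (mA/um), as if pre-characterized with Spice sweeps.
lut_linewidth = np.array([55.0, 60.0, 65.0, 70.0, 75.0])
lut_current = np.array([1.30, 1.15, 1.00, 0.90, 0.82])

def slice_current(linewidth_nm, slice_width_um):
    """Current of one rectangular slice, via table interpolation."""
    return np.interp(linewidth_nm, lut_linewidth, lut_current) * slice_width_um

# Litho-simulated contour sampled as local linewidth per slice (nm), for
# a device of total width 0.4 um cut into 8 slices of 0.05 um each.
contour = [61, 59, 58, 60, 63, 66, 68, 70]
i_total = sum(slice_current(lw, 0.05) for lw in contour)
print(f"equivalent current of the non-rectangular device: {i_total:.3f} mA")
```

An equivalent-linewidth device delivering the same current can then be substituted into the Spice netlist, which is how per-cell timing is re-characterized under each de-focus condition.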
Standard cell instances in a design occur in different placement contexts, and the linewidth impact of de-focus differs across instances of the same standard cell because optical interactions vary with the placement context. This variability can result in increased characterization effort, since there can be many possible variants of the same cell. Minimizing inter-cell optical interactions is the key to minimizing variability among different copies of a cell. The paper proposes a dummy poly insertion methodology to reduce inter-cell optical interactions; the dummy poly features act as "shields" against proximity effects. In the ideal case, the OPC solution of a standard cell, together with its dummy poly, can be used in all of its instantiations in the design without concern for inter-cell proximity effects. The main advantage of this method is the reduced characterization effort per cell. The paper demonstrates a reduction in timing and leakage pessimism by combining systematic variation-aware timing/leakage analysis with proximity shielding.
DFM sessions have matured significantly since their inception at DAC-2003, and have kept pace and relevance with industry developments, particularly in the RET arena. Starting with initial ideas on the links between OPC choices and timing (2003), to understanding the impact of systematic interactions between layout and de-focus on performance and power (2004), to the use of post-lithography feature shapes for timing and power analysis (2005), DFM papers have sought to introduce manufacturing awareness into design. Two papers this year address methods for increasing the robustness of layouts to lithography-induced systematic variation. At a high level, the DFM papers at DAC have moved from exploring the basic links between design and manufacturing, to analyzing the impact of lithography, and finally to techniques for mitigating the systematic impact of lithography.
Improvements in RET and post-lithography design analysis techniques have helped mitigate or compensate for a significant fraction of systematic variation. Currently, there is a need to understand whether we have systematic variation under control from both the design and manufacturing perspectives. If not, then all sources of such variation must be thoroughly understood and accounted for. Sources such as flare and etch micro-/macro-loading are known to cause layout-dependent variation that is partly systematic, and modeling and compensating for these effects may be important before moving to purely statistical design. Even when sources of systematic variation are known, modeling them may be too complicated for both process and design. Statistical modeling driven by process inputs, used in turn to drive "practical" statistical design, may be the flavor of things to come.
By Prof. Andrew B. Kahng
Andrew B. Kahng is professor of CSE and ECE at UC San Diego. He has served as founding General Chair of the International Symposium on Physical Design, and as technical program co-chair of the 2004 and 2005 Design Automation Conferences. From 2000 through 2003, he chaired both the U.S. and international Design Technology working groups for the ITRS. He has also served on the executive committee and as a theme leader for the MARCO Gigascale Systems Research Center since its inception in 1998. Since October 2004, Prof. Kahng has been on leave of absence from the university, serving as chairman and CTO of Blaze DFM, a company that seeks to provide new cost and yield optimizations in the VLSI design-to-manufacturing interface.