August 16, 2007 -- In EDA, two primary areas still have room for innovation. One is at high levels of abstraction (above RTL), where a stalwart band of startups as well as one or two of EDA's heavier commercial hitters continue to seek a solid and predictable path from an idealized functional specification to a concrete physical representation of a complex system-on-chip (SoC) design.
The other fertile ground for innovation sits at the extreme back end of the design cycle, where functionally verified netlists cross the chasm into GDSII. At this point, squirrelly physics come into play when the features being printed on silicon are smaller than the wavelength of the light used to pattern them. As silicon feature sizes have slid down the scale from the submicron range to the nanometer realm, the well-defined geometries crafted by designers into a layout become more difficult to preserve in photolithography (see the figure).
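The subwavelength gap described above can be made concrete with the standard Rayleigh resolution criterion, CD = k1 · λ / NA. The sketch below uses illustrative numbers (193-nm ArF illumination, a numerical aperture of 0.85, and a k1 factor of 0.4) that are assumptions for this example, not figures from the article:

```python
# Rayleigh criterion for the minimum printable feature size (CD):
#   CD = k1 * wavelength / NA
# Illustrative assumptions: 193 nm ArF illumination, NA = 0.85, k1 = 0.4.
wavelength_nm = 193.0
numerical_aperture = 0.85
k1 = 0.4

min_feature_nm = k1 * wavelength_nm / numerical_aperture
print(f"Minimum printable feature: {min_feature_nm:.1f} nm")
```

With these values the minimum printable feature lands near 90 nm, well below the 193-nm wavelength doing the printing, which is why techniques such as RET and OPC become necessary at these nodes.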
There's little argument that post-layout processing techniques such as reticle enhancement technology (RET), optical-proximity correction (OPC), and the like begin to run out of steam at nanometer scales. Rather than trying to correct the problems after the design work is complete, process knowledge must be built into the design flow itself. The question is how to accomplish that goal. How are designers to overcome the yield and process-variability issues that can overwhelm their designs at 90 nm and below?
By David Maliniak, Electronic Design Technical Editor
This brief introduction has been excerpted from the original copyrighted article.
View the entire article on the Electronic Design Magazine website.