FPGA Design Meets the Heisenberg Uncertainty Principle
Contributor: Mentor Graphics Corp.
November 5, 2005 -- The classic dilemma of meeting design specifications - on schedule - with increasingly complex devices is now a harsh reality for FPGA designers. Converging on these requirements, in terms of both performance and area utilization, has clearly become more challenging in high-end FPGA designs. To complicate matters, last-minute design functionality changes after assumed project "completion" have also become more common. One well-known company wasted months trying to incorporate a single late-arriving engineering change order (ECO), because the design team could not reproduce its previous timing results after making the specification change. It's like the Heisenberg Uncertainty Principle wreaking havoc in the programmable logic realm!
To eliminate the uncertainties now governing FPGA-based projects, companies must adopt a complete design methodology that provides the right combination of automated and interactive optimizations via logical and physical synthesis techniques, along with the right mix of analysis, debug and ECO capabilities. This powerful combination of user control and flexibility is the only proven route to predictable, accurate and ultimately successful design of leading-edge FPGAs today.
The problem, in no uncertain terms
Traditional RTL synthesis methods began to fall short as FPGAs came to resemble complex platforms containing millions of programmable gates and many embedded functions (including memories, registers, multipliers, DSPs, CPUs, high-speed I/Os and cores). Timing closure using standalone logical synthesis and P&R became an increasingly iterative and non-deterministic process. Without adequate access to physical synthesis, designers typically wrote and re-wrote RTL code and guided the P&R tools by grouping cells, along with some futile attempts at floorplanning. Alternatively, design teams simply tried numerous P&R runs. Designers iterated through P&R (the most time-consuming step in FPGA design) before gaining any visibility into whether the changes were a step in the right direction or only served to worsen the problem. These levels of unpredictability are unacceptable.
So is there a magic recipe for success? Some claim that the answer rests solely in physical synthesis (which is constantly being reincarnated in various avatars). As it turns out, however, even traditional physical synthesis may not suffice.
The need for physical synthesis is not a new revelation in itself. It has been applied effectively in FPGAs for over two years now, thanks to the practical approach taken by innovative tools and technologies, such as Precision Physical from Mentor Graphics, to address the specific structural limitations created by the pre-built nature of typical programmable routing fabrics. Fanout-based delay estimates in FPGAs do not model even a simplified version of physical reality. Optimization decisions based on a wire-load estimate are mostly random choices. Physical proximity is not always directly related to delay. In fact, in the newer generation FPGA designs, net delays can regularly exceed 80% of the total delay.
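To see why, consider a toy comparison - a minimal Python sketch with invented delay numbers, not any vendor's actual timing model - between a purely fanout-based estimate and a placement-aware one. Two nets with identical fanout can have wildly different routed delays, so the fanout model gives the optimizer no useful signal:

```python
# Toy illustration of why fanout-based delay estimates mislead optimization.
# All numbers are invented for illustration; they are not real device data.

def fanout_delay_estimate(fanout, ns_per_load=0.15, base_ns=0.5):
    """Wire-load-style estimate: delay grows only with the number of loads."""
    return base_ns + ns_per_load * fanout

def placed_delay_estimate(route_segments, ns_per_segment=0.35, base_ns=0.5):
    """Placement-aware estimate: delay depends on how far the router must travel."""
    return base_ns + ns_per_segment * route_segments

# Two nets with identical fanout but very different physical situations.
net_a = {"fanout": 4, "route_segments": 2}    # loads packed next to the driver
net_b = {"fanout": 4, "route_segments": 11}   # loads scattered across the die

for name, net in (("net_a", net_a), ("net_b", net_b)):
    wl = fanout_delay_estimate(net["fanout"])
    pl = placed_delay_estimate(net["route_segments"])
    print(f"{name}: wire-load estimate {wl:.2f} ns, placed estimate {pl:.2f} ns")

# The wire-load model reports identical delays for both nets, so an optimizer
# using it is effectively guessing; the placement-aware model shows that
# net_b's routing dominates the path delay.
```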
FPGA design demands advanced analysis
To start with, any good synthesis technology should provide the user with excellent analysis capabilities that suitably complement any automatic pushbutton flow. Because design intent and designer needs vary widely, users should have adequate control and flexibility to perform a wide range of optimizations either automatically or interactively. The availability of hard and soft core resources in FPGAs dictates an iterative decision-making process. Pushbutton synthesis is no longer enough, and interactive synthesis is gaining importance. Such tools must be driven by an "ASIC-quality" timing engine, in which timing analysis and debug - accompanied by the ability to take suitable corrective action - become vital for success.
The designer's visibility must not stop at the post-synthesis technology level. An indispensable weapon in any FPGA designer's toolkit is an interactive analysis environment that spans the entire flow, from RTL to physical implementation. Cross-probing of timing between physical, gate and RTL views enables designers to identify bottlenecks and quickly initiate actions to fix them. Interactive synthesis techniques provide guidance to the designer, allowing "what-if" explorations earlier in the design cycle. A robust synthesis environment also provides a variety of design representations, such as high-level operators and architecture-specific technology cells. Taking advantage of interactive synthesis capabilities provides an earlier understanding of the nature of the design and whether it will (or will not) meet specifications.
Using the right combination of logical and physical synthesis can certainly be powerful in solving some of the problems, but it is only part of the solution.
As FPGA design starts ramp up, and especially as complex FPGAs are used as SoC replacements, another factor is gaining importance above and beyond traditional physically aware RTL synthesis. The fast-paced, often whimsical nature of the electronics business means that requirements invariably change late in the design cycle. Dreaded for years in the ASIC arena, these last-minute ECOs had, until recently, only a minimal impact on project schedules in the programmable logic space. Now, ECOs are known to cause major delays in FPGA design schedules. Even worse, these late changes may result in lost business opportunities.
To reduce the potentially catastrophic impact of these late-arriving specification changes in today's complex FPGA designs, designers should make use of new incremental design methods and advanced ECO flows. These flows limit the scope and impact of a specification change as much as possible, minimizing the number of manipulated variables and helping ensure that a last-minute design change converges.
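Conceptually, an incremental/ECO flow behaves like the following sketch - Python pseudocode of the idea only, with hypothetical function names rather than any particular tool's commands: partitions untouched by the change reuse their existing placement and timing, and only the affected partition pays the cost of re-synthesis and re-place-and-route.

```python
# Minimal sketch of the idea behind an incremental/ECO flow: only partitions
# affected by the specification change are re-implemented; placement and
# timing results for untouched partitions are reused. Function names are
# hypothetical stand-ins for whatever the real tool flow provides.

import hashlib

def rtl_fingerprint(rtl_text: str) -> str:
    """Fingerprint a partition's RTL so unchanged partitions can be detected."""
    return hashlib.sha256(rtl_text.encode()).hexdigest()

def synthesize_and_place(name, rtl):
    """Placeholder for the expensive synthesis + P&R step."""
    return f"placed<{name}>"

def incremental_compile(partitions, previous_results):
    """partitions: {name: rtl_text}; previous_results: {name: (fingerprint, placed_netlist)}."""
    results = {}
    for name, rtl in partitions.items():
        fp = rtl_fingerprint(rtl)
        prev = previous_results.get(name)
        if prev and prev[0] == fp:
            # Untouched partition: reuse the locked placement, keeping its timing intact.
            results[name] = prev
        else:
            # The ECO landed here: re-synthesize and re-place only this partition.
            results[name] = (fp, synthesize_and_place(name, rtl))
    return results

# Example: only "u_dsp" changed in the ECO, so only it is re-implemented.
prev = {"u_ctrl": (rtl_fingerprint("ctrl rtl v1"), "placed<u_ctrl>")}
new_rtl = {"u_ctrl": "ctrl rtl v1", "u_dsp": "dsp rtl v2"}
print(incremental_compile(new_rtl, prev))
```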
Another factor rapidly gaining significance in an FPGA world dominated by higher silicon capacities and complexities is the convergent role of design, synthesis and verification technologies. Design flows for today's complex FPGAs now closely resemble those adopted by SoC designers in the late 1990s, where design creation and synthesis are tied to verification every step of the way. Because design iterations take longer, the cost of discovering and correcting defects rises sharply in the later stages of the design cycle.
Early discovery of throughput issues requires more in-depth analysis of performance and timing throughout the synthesis process, and early discovery of timing issues requires analysis of constraint coverage during synthesis. More importantly, a complete, consistent, vendor-independent synthesis and verification flow allows exploration of the capabilities offered by each of the available FPGA architectures within a single environment. This enables device-independent IP re-use and reduces the need to learn device-specific coding techniques and attributes just to carry out an architecture evaluation. It also eliminates the training overhead associated with learning multiple design environments.
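As a rough illustration of the kind of report a constraint-coverage analysis produces - a simplified Python sketch with made-up design data, not any tool's actual output - the check below flags timing endpoints that no clock constraint reaches, since unconstrained paths are neither optimized nor verified:

```python
# Simplified sketch of a constraint-coverage check: what fraction of timing
# endpoints (registers, output ports) are covered by a clock constraint?
# The endpoint and constraint data here are hypothetical stand-ins.

endpoints = {
    "u_core/state_reg": "clk_sys",
    "u_core/count_reg": "clk_sys",
    "u_phy/rx_reg":     "clk_rx",   # this clock is never constrained below
    "dout_port":        None,       # unconstrained output
}

constraints = {"clk_sys": 5.0}  # period constraints in ns, keyed by clock name

covered = [e for e, clk in endpoints.items() if clk in constraints]
uncovered = [e for e in endpoints if e not in covered]

print(f"constraint coverage: {len(covered)}/{len(endpoints)} endpoints")
for e in uncovered:
    print(f"  WARNING: {e} has no timing constraint; synthesis will not optimize it")
```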
To help meet specifications, today's FPGA synthesis tools deliver sophisticated high-level operator extraction and mapping. Efficient inference of RTL structures into embedded resources enables optimal usage of ROM, RAM and DSP blocks, while what-if analysis helps identify the best implementation. Making the most of these technology advancements in synthesis also reduces vendor-dependent design content, easing migration and maintenance efforts.
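A what-if evaluation of inference choices might look like the following - a hand-rolled Python sketch with invented resource and delay figures, purely to illustrate the trade-off, not the output of any synthesis tool:

```python
# Hypothetical what-if comparison of two implementations for a 2K x 18 memory
# inferred from RTL: a dedicated block RAM versus LUT-based distributed RAM.
# Resource counts, delays and the budget are invented for illustration only.

candidates = {
    "block_ram":       {"brams": 1, "luts": 0,    "access_ns": 2.1},
    "distributed_ram": {"brams": 0, "luts": 1150, "access_ns": 3.4},
}

budget = {"brams_free": 0, "luts_free": 4000, "clock_period_ns": 4.0}

def feasible(impl):
    return (impl["brams"] <= budget["brams_free"]
            and impl["luts"] <= budget["luts_free"]
            and impl["access_ns"] <= budget["clock_period_ns"])

viable = {name: impl for name, impl in candidates.items() if feasible(impl)}
best = min(viable, key=lambda n: viable[n]["access_ns"]) if viable else None
print(f"viable implementations: {list(viable)}; chosen: {best}")
# With no block RAMs left in the budget, the tool (or designer) falls back to
# distributed RAM even though it is slower and consumes LUTs.
```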
A successful designer must use a judicious combination of automated synthesis in tandem with interactive analysis to solve complex FPGA design challenges. Combining the logical, timing, and physical worlds into a single intuitive design and analysis environment allows designers to take control of next-generation FPGA implementation and timing issues, slashing design time by weeks (or months) while significantly improving performance. In addition, it is important to enhance existing design environments with specific tools and incremental/ECO flows that can flexibly target the FPGA technology best suited for a given design. Last but not least, designers must take advantage of the convergent roles of design, synthesis and verification to bring defect discovery earlier in the design cycle.
By Daniel Platzker, Product Line Director, Design Creation and Synthesis Division, Mentor Graphics
Daniel started his career as a hardware engineer developing state-of-the-art hardware modeling systems at Daisy Systems, a pioneering EDA company, back in the eighties. In the years between Daisy and joining Mentor Graphics, Daniel gained extensive experience managing high-tech companies, including founding and serving as CEO of Tegrity, an e-learning company. Over the last 20 years, Daniel has held executive positions in marketing, engineering, and operations at several high-tech organizations, including the Israeli Department of Defense, Daisy, Tegrity, Castelle and BackWeb. Daniel is a cum laude graduate of the Technion's School of Electrical Engineering, Israel, and earned a master's degree in Engineering Management from Santa Clara University, CA.
Reprinted from SOCcentral.com.