"Those who fail to plan, plan to fail." This is certainly true when it comes to successful verification, where experience repeatedly demonstrates that success depends on methodical verification planning combined with systematic verification processes.
With the emergence of assertion library and language standards, design teams are discovering significant value in integrating formal verification into their flow. Yet there is a huge disconnect between writing an ad hoc collection of assertions and implementing a systematic approach to formal verification planning, which includes the strategic writing of a comprehensive set of assertions. It is important to make a habit of the latter: teams must plan for success.
In order to get there, it is necessary to review the existing challenges to achieving verification closure and the inherent limitations of both simulation and raw formal verification algorithms. Once these are established, it is possible to propose a methodology solution for them in the form of a verification process that includes formal verification as a key component.
The important thing is to adopt this solution in the form of a formal test plan consisting of seven systematic steps, including an effective strategy to verify each property defined in the formal test plan. Generally, this strategy is influenced by specific verification goals, project schedules, and resource constraints.
Foibles of system-centric simulation and their remedies
A few years ago, I interviewed 54 ASIC and SoC design houses, representing various market segments, in order to assess best practices in the industry. I learned that only about one third of them performed any form of comprehensive subsystem-level simulation, resulting in obstacles to verification closure. By performing only system-level simulation, these companies experienced poor simulation controllability and observability as well as major debugging bottlenecks.
I found that this problem is exacerbated in the late stages of the design flow due to increasing demands on the designer, including synthesis, timing closure, power closure, floorplanning, place-and-route, and many other physical design issues. The result is a progressive reduction in the time designers can contribute to the debugging process over the life of a design project. This phenomenon can be expressed as a bug-rate to team debug capacity curve, shown in Figure 1.
Figure 1. Organizations that focus their verification effort predominantly on system-level simulation experience a drop in debug capacity over the life of a project.
In contrast, I found that organizations that invested in comprehensive subsystem-level simulation generally experienced better use of their designers’ debugging capacity. In addition to improved controllability and observability, which reduce debugging time, these companies were able to explore design complexity sooner in the flow and thus minimize the deleterious effects of late-stage bugs on existing RTL.
Figure 2. Comprehensive subsystem-level simulation improves debug capacity.
Another obstacle to achieving verification closure is the resource investment required to create a simulation-based infrastructure. In fact, the simulation testbench is often as complicated as, or many times more complicated than, the design itself. Thus, verification typically starts later in the flow, just as the designers' debugging capacity decreases. This can be remedied by integrating formal verification into the flow.
This allows verification to begin concurrently with the design process. In fact, the infrastructure (that is, assertions and constraints) required to perform static formal verification is significantly easier to create than a comprehensive simulation-based testbench. Further, by employing dynamic formal verification, these companies improve coverage and overcome the inherent limitations of the raw algorithms of both simulation and formal verification.
Figure 3. Integrating formal verification into the flow maximizes debugging capacity.
Although it is important to have a high-quality simulator and a high fidelity coverage model, there are inherent limitations to simulation-based algorithms that prevent us from achieving high coverage. Furthermore, raw formal verification algorithms have inherent limitations that prevent us from representing huge state-spaces. Yet, as it turns out, we can combine simulation and formal verification to address each other’s inherent limitations.
For example, we can write an assertion that states that it is not possible to overflow a FIFO. During formal verification, we would waste many cycles just filling up the FIFO to get to the boundary case, and could potentially run into state-explosion. However, if we simulate out to an interesting coverage state (for example, when the FIFO is almost full), then apply formal verification around this interesting state, we are able to explore a much deeper design space than possible with static formal verification, while improving our overall verification coverage. This approach is known as dynamic formal verification, pioneered by the 0-In® Functional Verification Business Unit of Mentor Graphics®.
Instead of wasting static formal verification cycles attempting to fill up the FIFO, we can efficiently run a simulation to where the interesting coverage event occurs and let the dynamic formal tool extract the interesting state. Then, we can reuse this interesting state as a new initial state for a formal search. Hence, dynamic formal verification overcomes the inherent limitations of both simulation's and formal verification's raw algorithms, enabling deeper bug hunting and improved verification coverage.
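As a sketch of the FIFO example above, the no-overflow requirement and the "interesting" almost-full coverage target might be expressed in SVA as follows (the signal names clk, rst_n, wr_en, full, and almost_full are hypothetical, not taken from any specific design):

```systemverilog
// Hypothetical FIFO control signals: clk, rst_n, wr_en, full, almost_full.

// Safety property: a write must never be attempted while the FIFO is full.
property p_no_overflow;
  @(posedge clk) disable iff (!rst_n)
    !(wr_en && full);
endproperty
a_no_overflow: assert property (p_no_overflow);

// Interesting coverage state for dynamic formal: simulate until the FIFO
// is almost full, then launch a formal search from that seeded state.
c_almost_full: cover property (@(posedge clk) disable iff (!rst_n) almost_full);
```

In a dynamic formal flow, simulation drives the design toward c_almost_full, and the tool then explores exhaustively around that state to try to violate a_no_overflow.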
The verification plan
There is significant value in integrating formal into a project’s verification flow. However, in order to achieve high quality results, it is critical that companies implement an effective verification flow that includes a systematic approach to writing a comprehensive set of assertions. This is the verification plan.
A project’s verification plan is the specification for the verification process. Developing it usually involves the entire engineering team (architects, designers, and verification engineers). In general, the verification plan defines exactly what functionality will be verified, how it will be verified (the verification strategy and resource allocation), and when the verification process is complete (for example, metrics for measuring progress or completion criteria). Without a verification plan, it is unlikely that a verification team will achieve first-time verification success in terms of schedule and quality.
Here are the steps, or good habits, I recommend for creating a formal-based verification plan:
- Identify candidate blocks. Determine which blocks are good candidates for formal verification. Blocks with high concurrency are good candidates; blocks involving data transformation (for example, an MPEG decoder) are not.
- Document the interfaces. Create a block-diagram and table that describe the details for the interface signals that must be referenced (monitored) when creating your set of formal properties. You will use this list to determine completeness of the requirement checklist during the review process.
- Describe the block. Briefly describe the key characteristics of your block. The overview does not have to be greatly detailed, but it should highlight the major functions and features.
- Capture a requirements checklist. In natural language, list all requirements for your block. Give each requirement a unique label, which helps map your assertions back to the natural-language requirements.
- Formalize the properties. Convert each of the natural language requirements into a set of OVL, PSL, or SVA assertions, using any additional modeling required to describe the intended behavior.
- Define coverage goals. Define coverage targets that are helpful in determining the quality of the constraints used during a proof (for example, identifying over-constrained situations).
- Select a verification strategy. Detail one of the verification strategies described below for each requirement listed in Step 4.
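To illustrate Steps 4 through 6, the following sketch shows one possible path from a natural-language requirement to a formalized SVA property and an associated coverage goal. The requirement label R1 and the req/gnt handshake signals are hypothetical examples, not drawn from any particular design:

```systemverilog
// Step 4 (requirements checklist), hypothetical entry:
//   R1: "Every request must receive a grant within 8 cycles."

// Step 5 (formalize the properties): one possible SVA encoding of R1.
property p_r1_grant_latency;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:8] gnt;
endproperty
a_r1: assert property (p_r1_grant_latency);

// Step 6 (coverage goal): guard against over-constraining. If the
// assumptions on the environment make requests impossible, a_r1 would
// pass vacuously; this cover property flags that situation.
c_r1_req_seen: cover property (@(posedge clk) disable iff (!rst_n) req);
```

The unique label R1 in the property and assertion names is what lets reviewers trace each formal property back to its checklist entry during the review process.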
Figure 4. The formal test planning process.
Four formal verification strategies
Before you select a strategy, you should first prioritize your list of properties (created in Step 4) to help you identify the high-value properties that have a clear ROI and the potentially high-effort properties that may be difficult to prove or lack designer support. To help order your list of properties, answer the following questions:
- Did a re-spin occur on a previous project for a similar property? (high ROI)
- Is the verification team concerned about achieving high coverage in simulation for a particular property? (high ROI)
- Is the property control-intensive? (high likelihood of success)
- Is there sufficient access to the design team to help define constraints for a particular property? (high likelihood of success)
After ordering your list, assign an appropriate strategy to each property; your verification goals, project schedule, and resource constraints all influence the choice. I recommend choosing from one of the following four strategies.
- Full proof. Projects often have many properties in the list that are of critical importance and concern. For example, to ensure that the design is not dead in the lab, there are certain properties that absolutely must be error free. These properties warrant applying the appropriate resources to achieve a full proof.
- Bug-hunting. Using formal verification is not limited to full proofs. In fact, you can effectively use formal verification as a bug-hunting technique, often uncovering complex corner-cases missed by simulation. The two main bug-hunting techniques are bounded model checking, where we prove that a set of assertions is safe out to some bounded sequential depth; and dynamic formal, which combines simulation and formal verification to reach deep complex states.
- Interface formalization. The goal of interface formalization is to harden your design’s interface implementation using formal verification prior to integrating blocks into the system simulation environment. In other words, your focus is purely on the design’s interface (versus a focus on internal assertions or block-level, end-to-end properties). The benefit of interface-formalization is that you can reuse your interface assertions and assumptions during system-level simulation, dramatically reducing integration debugging time.
- Improved coverage. Creating a high-fidelity coverage model can be a challenge in a traditional simulation environment. If a corner-case or complex behavior is missing from the coverage model, then it is likely that behaviors of the design will go untested. However, dynamic formal is an excellent way to leverage an existing coverage model to explore complex behaviors around interesting coverage points. The overall benefits are improved coverage and the ability to find bugs that are more complex.
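The interface-formalization strategy relies on the dual role a single property can play: as an assumption (constraint) during block-level formal verification, and as an assertion (checker) when the block is integrated into system-level simulation. A minimal sketch, assuming a hypothetical valid/ready-style interface:

```systemverilog
// Hypothetical interface signals: clk, rst_n, valid, ready, data.
// Rule: once 'valid' is asserted, 'data' must hold stable until accepted.
property p_data_stable;
  @(posedge clk) disable iff (!rst_n)
    (valid && !ready) |=> $stable(data);
endproperty

// Block-level formal verification: treat the environment's behavior as a
// constraint, so the tool explores only legal input stimulus.
m_data_stable: assume property (p_data_stable);

// System-level simulation reuse: bind the same property as a checker on
// the actual upstream block, e.g.:
//   a_data_stable: assert property (p_data_stable);
```

Because the same property text is reused verbatim, any mismatch between the assumed interface behavior and the real neighboring block shows up immediately at integration, which is what dramatically reduces integration debugging time.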
A habit of success
A verification process that includes formal verification as a key component is critical to reverse the degradation of debug capability, which is a serious impediment to verification closure. All it takes is a little planning using the seven systematic steps and four formal verification strategies discussed here to effectively integrate formal into your design flow.
According to Aristotle, “We are what we repeatedly do. Excellence, then, is not an act, but a habit.” Verification excellence is no different. The habit of formal-based verification planning is fundamental to your success.
By Harry Foster, Principal Engineer, Mentor Graphics Corp.
H. Foster, L. Loh, B. Rabii, and V. Singhal, "Guidelines for Creating a Formal Verification Testplan," in Proc. DVCon 2006, San Jose, CA.
Go to the Mentor Graphics Corp. website to learn more.