April 20, 2011 -- Although simulation-based coverage closure processes are generally well understood (if hard to execute), the process for achieving formal property checking proof closure is not. Engineers often get stuck on the question: what do I do now about those properties that are not easily proven? This article introduces a process, consisting of a set of recommendations, for achieving static formal verification closure. The process complements existing simulation processes.
We begin by introducing steps that will help you determine whether your candidate block is even appropriate for formal verification, prior to attempting a proof. We then discuss multiple techniques you can use to achieve both simulation and formal verification closure. Using formal to aid simulation and using simulation to aid formal are common themes throughout this article. With this in mind, we introduce multiple strategies that combine formal and simulation techniques to increase verification confidence on properties deemed too difficult to prove. Finally, we introduce formal-friendly property-coding recommendations and more advanced abstraction techniques to help you achieve a full proof.
Using formal verification and simulation together
In his landmark 1986 article, "No Silver Bullet: Essence and Accidents of Software Engineering," Dr. Fred Brooks observed that the inherent complexity of software systems is derived from four elements: (a) the complexity of the problem domain, (b) the difficulty of managing the development process, (c) the flexibility possible through software, and (d) the problems of characterizing the behavior of discrete systems. Certainly, it can be argued that there is no silver bullet for hardware verification closure as a result of the same inherent complexities.
So what do we mean by "there is no silver bullet" when it comes to hardware verification? Simply this: there is no single tool or process we can apply to all classes of design components to achieve verification closure. For example, even the simplest 32-bit combinational comparator can take hundreds of thousands of years to exhaustively simulate, even with the fastest simulators on the market: its two 32-bit operands define 2^64 input combinations, and at a million vectors per second that is more than 500,000 years of simulation. Static formal verification enables us to automatically explore all possible corner cases for designer-sized blocks using exhaustive mathematical techniques rather than incomplete simulation techniques (for example, we could exhaustively prove the 32-bit comparator in a matter of seconds). Yet static formal verification suffers from state explosion on large designs, and it is not applicable to all classes of design components (for example, an MPEG decoder is not a good candidate for static formal verification). Since there is no silver bullet when it comes to hardware verification, and given that today's ASIC design flows often involve aggressive development schedules combined with limited verification resources, you must choose your set of verification "weapons" wisely and apply them in a timely fashion.
An important element of any verification planning process is defining a strategy to achieve closure. In this article, we present a process that combines formal verification with simulation to help you achieve verification closure. The guiding principle we use to achieve closure is to use formal verification to aid simulation and use simulation to aid formal verification. That is, we will use the inherent strengths of one process to overcome the weaknesses of the other.
Coverage is a measure of verification completeness. However, to measure anything, we need metrics. Unfortunately, just as there is no silver bullet when it comes to verification processes, we do not have a single set of metrics to completely characterize the functional behavior of a design. Hence, a coverage space for today's designs is a multi-dimensional, orthogonal set of metrics. Line coverage, for example, is an implicit metric that can automatically be extracted from an HDL model. It forms one dimension of a coverage space. Functional coverage is an example of a higher-level explicit metric that forms another dimension of a coverage space. Assertion (or property) coverage is another example of an explicit metric that forms yet another dimension to our coverage space. There are obviously many other possible dimensions (that is, metrics) we could define to help us approximate the design behavior. To achieve our goal of verification closure, we strive to maximize each dimension of our coverage space during the course of verification.
In a dynamic-based simulation flow, the process of achieving closure related to functional coverage often involves the following steps:
- a. Define a coverage model.
- b. Simulate and gather coverage results (see Figure 1).
- c. Review holes within a coverage space with respect to a coverage model and order coverage holes in terms of criticality (see Figure 2).
- d. Bias simulation toward specific holes by tweaking random constraints (a brief sketch of constraint biasing follows this list).
We repeat steps (b) through (d) until we can make little forward progress (see the "large holes" and "small holes" phases in Figure 2).
- e. Refine (or remodel) the coverage model, removing coverage items impossible to hit or deemed unrealistic to hit under a given set of schedule and resource constraints.
We repeat steps (b) through (e) until we can make little forward progress. For large holes (critical holes) too difficult to hit using constrained random techniques, the process might include the following additional step:
- f. Target coverage holes by writing a set of special directed tests.
Figure 1. Coverage closure phase.
Figure 2. Functional coverage input space.
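To make step (d) above concrete, here is a minimal sketch of constraint biasing, assuming a hypothetical packet class whose long-burst corner is rarely hit (all names and weights are ours, for illustration only):

class packet;
  rand bit [3:0] len;
  rand bit       burst;
  // Re-weight generation toward the rarely hit long-burst corner
  constraint c_bias {
    burst dist {1 := 8, 0 := 2};
    len   dist {[12:15] :/ 60, [0:11] :/ 40};
  }
endclass

Re-simulating with the re-weighted distributions concentrates stimulus on the hole without abandoning randomness.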
While the process of dynamic simulation closure is well understood in a traditional verification environment (although not necessarily easy to achieve), the process of static formal verification closure is generally not well understood due to the lack of industry formal experience and published proof closure guidelines. As verification teams consider the option of integrating functional formal verification tools into their flow, the lack of formal proof closure guidelines often results in ad hoc verification results with a questionable return on investment. Thus, planning for formal closure and understanding your options when you encounter complexity is critical in projects with a fixed time and resource budget. The section entitled "Practical approaches to achieving closure" later in this article introduces a process to help achieve verification closure that includes both simulation and formal verification.
Controllability vs. observability
In functional verification, there are two aspects of coverage that must be understood when planning for verification closure: controllability coverage and observability coverage.
Controllability coverage is a measure of the quality of the input stimulus' ability to activate (or exercise) various structures within a design (line coverage, for example), or expected design behaviors (functional coverage, for example). In fact, most of today's simulation-based metrics actually measure controllability.
Unlike simulation, formal verification does not depend on generating input stimuli since it explores the entire input space of the design using mathematical techniques (without a need for input vectors or simulation). This means that if a property is proven true using formal verification, there is no sequence of input vectors you can simulate that would expose a corner-case bug. Hence, there is no need for metrics that measure the quality of the input stimulus' ability to activate structures within a design (since the entire valid input space is considered).
The risk with formal verification is that a proof might have completed with a set of formal constraints that restricts the input space to a subset of possible behaviors. Therefore, it is important to validate your constraints.
The other key aspect of coverage that you must consider during verification is observability coverage. That is, are there sufficient checkers within a simulation testbench to validate all the expected behaviors? In assertion-based verification, observability is often measured with assertion density metrics (which help determine if there are structures within the design that are not covered by an assertion).
Recommendation 1. Make assertion density (a key metric) a part of your overall coverage strategy when adopting either dynamic or static assertion-based verification techniques.
Assertions and measuring proof progress
Successful application of formal verification depends on high-quality assertions as well as sufficient access to the design team responsible for the block you wish to prove. Creating a high-quality set of assertions begins with planning. For example, creating a natural language list of properties is an important first step during formal verification planning. You can then convert the list of properties into a set of assertions, thus providing proof targets. This approach provides a direct means to measure progress throughout the overall verification process simply by tracking a set of proved natural language properties contained within the verification plan.
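For example, here is a minimal sketch (the fifo_empty and rd_en signals are hypothetical) of converting the natural-language property "the FIFO must never be read while empty" into an SVA proof target:

// Natural language: "the FIFO must never be read while empty"
a_no_read_when_empty: assert property (
  @(posedge clk) disable iff (reset)
  fifo_empty |-> !rd_en
);

Checking this assertion off against its natural-language entry in the verification plan gives a direct measure of proof progress.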
Recommendation 2. When applying formal, ensure your assertions are high-quality and that you have sufficient designer access to the design blocks you want to target.
Where to apply formal verification
Traditionally, applying formal property checking has been viewed as a process that is orthogonal to simulation-based approaches. In contrast, our philosophy is that the two processes are complementary. However, before applying formal verification, it is necessary to understand classes of designs that lend themselves to formal property checking and those that do not.
We can classify design blocks as either sequential or concurrent, control or datapath, and data transport or data transform. Based on their classification, we can identify types of design blocks amenable to formal and types that are not. This classification was first proposed as a key step in "Guidelines for Creating a Formal Verification Testplan" (Foster et al., DVCon 2006) and is summarized here.
Sequential vs. concurrent designs. A key criterion for choosing design blocks suitable for formal is whether the block is mostly sequential (that is, non-concurrent) or mostly concurrent.
Sequential design blocks (see Figure 3) typically operate on a single stream of input data, even though there may be multiple packets at various stages of the design pipeline at any instant. An example of this sequential behavior is an instruction decode unit that decodes a processor instruction over many stages. Another example is an MPEG encoder block that encodes a stream of video data. Formal verification usually faces state explosion for sequential designs because, in general, most interesting properties involve a majority of the flops within the design block.
Figure 3. Sequential paths.
Concurrent design blocks (see Figure 4) deal with multiple streams of input data that collide with each other. An example is the PCI Express Data Link Layer block, which essentially transports packets unchanged from multiple input sources to multiple output destinations.
Figure 4. Concurrent paths.
Control vs. data transport vs. data transform blocks. Given the discussion above, the following coarse characterization often helps in determining whether formal is suitable. We can generally characterize design blocks as control- or datapath-oriented. We can further characterize datapath design blocks as either data transport or data transform (see Figure 5).
Figure 5. Data verification flow.
Again, a PCI Express Data Link Layer block is an example of a data transport block. Data transform blocks perform a mathematical computation or an algorithm over a stream of input values (for example, an MPEG encoder or a floating-point unit).
Recommendation 3. When applying formal, choose blocks that are control-oriented or perform data transport with high concurrency.
How to apply formal verification
You can successfully apply a range of formal verification strategies to your designs, including improved coverage, interface formalization, bug-hunting, and full proof. However, prior to choosing a strategy, we recommend that you classify the key properties identified in your verification plan.
Recommendation 4. Order your list of properties to help identify high-value properties with a clear ROI — or potentially high-effort properties that may be difficult to prove.
To help order your list of properties, answer the following questions:
- Did a respin occur on a previous project for a similar property? (high ROI)
- Is the verification team concerned about achieving high coverage in simulation for a particular property? (high ROI)
- Is the property control-intensive? (high likelihood of success)
- Is there sufficient access to the design team to help define constraints for a particular property? (high likelihood of success)
After ordering your list, assign an appropriate strategy to each property based on your verification goals, project schedule, and resource constraints. We recommend you choose from the following four strategies.
1. Full proof. Projects often have many properties in the list that are of critical importance and concern. For example, to ensure that the design is not dead in the lab, there are certain properties that absolutely must be error-free. These properties warrant applying the appropriate resources to achieve a full proof.
2. Bug-hunting. Using formal verification is not limited to full proofs. In fact, you can effectively use formal verification as a bug-hunting technique, often uncovering complex corner cases missed by simulation. The two main bug-hunting techniques are bounded model checking, where we prove that a set of assertions is safe out to some bounded sequential depth, and dynamic formal, which combines simulation and formal verification to reach deep, complex states. (The section entitled "Using simulation to aid formal" later in this article provides additional details about dynamic formal.)
3. Interface formalization. The goal of interface formalization is to harden your design's interface implementation using formal verification prior to integrating blocks into the system simulation environment. In other words, your focus is purely on the design's interface (versus a focus on internal assertions or block-level, end-to-end properties). The benefit of interface formalization is that you can reuse your interface assertions and assumptions during system-level simulation to dramatically reduce integration debugging time (a brief sketch follows this list).
4. Improved coverage. Creating a high-fidelity coverage model can be a challenge in a traditional simulation environment. If a corner case or complex behavior is missing from the coverage model, then it is likely that behaviors of the design will go untested. However, dynamic formal is an excellent way to leverage an existing coverage model to explore complex behaviors around interesting coverage points. The overall benefits are improved coverage and the ability to find bugs that are more complex.
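As a sketch of interface formalization (the req/gnt protocol rules and all names here are hypothetical), the interface checkers can be packaged in a module and bound to the design, so exactly the same properties that harden the block under formal are reused as monitors during system-level simulation:

module req_gnt_checker(input logic clk, reset, req, gnt);
  // Rule 1: a request must stay asserted until a grant
  a_req_until_gnt: assert property (
    @(posedge clk) disable iff (reset)
    $rose(req) ##1 (~gnt)[*1:$] |-> req);
  // Rule 2: a grant is a single-cycle pulse
  a_gnt_pulse: assert property (
    @(posedge clk) disable iff (reset)
    gnt |=> !gnt);
endmodule

// Attach the checker without modifying the RTL (dut is a hypothetical module)
bind dut req_gnt_checker u_req_gnt_chk(.clk(clk), .reset(reset), .req(req), .gnt(gnt));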
Practical approaches to achieving closure
As we stated at the beginning of this article, there is no silver bullet when it comes to verification. During the course of verification — whether you are trying to achieve simulation coverage closure or a complete proof using static formal — there comes a point in the process at which you must ask yourself, "Is there a clear return-on-effort to continue with the existing approach?" In other words, given infinite resources and time, it should always be possible to achieve coverage or proof closure; however, experience has demonstrated that time is generally an engineer's most precious resource. Hence, understanding the various options and alternatives for achieving closure is important.
In this section we discuss approaches that can help you achieve verification closure and alternative solutions that you can apply to improve confidence when the return-on-effort for your preferred approach is too great. We begin by discussing techniques that can, in many circumstances, be effective in achieving proof closure in static formal verification. We then discuss an approach that uses simulation to assist in achieving formal verification property closure. Finally, we discuss an approach that uses formal verification to assist in achieving simulation-based coverage closure.
Static formal closure techniques
In the ideal situation, you are delighted to find that your formal verification tool is able to automatically prove the specified properties. However, there will be times when you might be less than delighted, and you wonder, "What should I do next when the proof is too complex?" In this section, we address a number of items you can apply to help you achieve formal proof closure, assuming your strategy is a full proof (as previously discussed).
SVA and PSL are expressive property/assertion languages. However, to prevent inconclusive proofs, take care when coding your properties. In many cases, engineers create overly complex assertions, which contribute to state explosion during static formal verification or detrimentally impact simulation performance.
For example, given the simple property "a request must stay asserted until a grant," the engineer might be tempted to write in SVA:
@(posedge clk) disable iff (reset)
$rose(req) |=> req[*0:2000] ##1 gnt;
With this approach, the engineer has selected a large constant (2000) as a bound on how long req might remain asserted. The problem is that the large constant forms a bound for a counter during the formal search process, which can be inefficient for certain formal algorithms (the tool increments a count without making any forward progress in the search). In fact, it is generally better (depending on the formal algorithm employed) to code this type of assertion with an unbounded repetition, as follows:
@(posedge clk) disable iff (reset)
$rose(req) |=> req[*0:$] ##1 gnt;
However, in some tools, even this form of coding is problematic. Technically, it results in a liveness property because the infinite loop associated with req is on the right-hand side of the implication: if the antecedent expression is satisfied, then the consequent (involving the infinite loop) must be satisfied during the formal search. Liveness properties are much harder to formally prove than safety properties, and you should avoid them. In practice, you can recode most liveness properties as safety properties, which are generally more efficient in formal verification and also provide a usable check for simulation. Note, too, that many tools will automatically convert a liveness property into a property with a smaller finite bound, which may or may not be what you want.
An effective alternative is to recode this property as a safety property, which will perform better in formal verification. Unfortunately, the formal-friendly property is not necessarily intuitive:
@(posedge clk) disable iff (reset)
$rose(req) ##1 (~gnt)[*1:$] |-> req;
In this case, the loop has been moved to the left-hand side of the implication.
It is a good idea to avoid specifying infinite sequences (for example, repetitions involving $, like [*0:$]) on the right-hand side of an implication. Whenever possible, it is generally better to specify an actual sequential bound within a repetition operator (which can also improve the performance of the assertion in simulation).
Recommendation 5. A project team should create a set of formal-friendly reusable named properties or use assertion libraries to avoid inadvertently coding formal-unfriendly properties.
Reusable named properties and assertion libraries simplify maintenance, allowing you to address performance issues in a single location. Two examples of formal-friendly reusable named properties that you might want to add to your library include properties related to until and before behaviors:
@(posedge ck) disable iff (rst)
$rose(P) ##1 (~R)[*1:$] |-> Q;
@(posedge ck) disable iff (rst)
$rose(P) ##1 (~(Q & ~R))[*1:$] |-> ~R;
Returning to our example property "a request must stay asserted until a grant," we can code it in SVA by instantiating the first of these library patterns.
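A sketch of the packaged form follows; the property name and argument list are ours, for illustration only:

// Reusable "hold until" pattern from the library
property p_hold_until(clk, rst, start, stop, hold);
  @(posedge clk) disable iff (rst)
  $rose(start) ##1 (~stop)[*1:$] |-> hold;
endproperty

// Instantiated for the request/grant example
a_req_until_gnt: assert property (p_hold_until(clk, reset, req, gnt, req));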
As a guideline, we recommend coding assertions as clearly and simply as possible. Whenever possible, partition complex property expressions into a simpler set of assertions. This approach generally results in improved performance in both formal verification and simulation.
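As a sketch of such partitioning (the handshake rules here are hypothetical), a single monolithic assertion can be split into independent proof targets:

// Monolithic version (harder to prove and debug):
//   $rose(req) |-> ##[1:4] ack ##1 !req
// Partitioned into two simpler obligations:
a_ack_follows_req: assert property (
  @(posedge clk) disable iff (reset)
  $rose(req) |-> ##[1:4] ack);
a_req_drops_after_ack: assert property (
  @(posedge clk) disable iff (reset)
  ack |=> !req);

Each piece has a smaller cone of influence, and a failure now points at one specific obligation.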
Formal assumptions (often referred to as constraints) are necessary to partially model the environment of a design and to confine the behavior explored during a proof to valid behavior. However, assumptions can affect the performance of the static formal verification tool positively or negatively. Complex temporal assumptions increase proof complexity, which can yield inconclusive results, and too many assumptions can over-constrain the design. It is important, therefore, to validate your assumptions in simulation to identify proofs that were potentially over-constrained.
Recommendation 6. Validate all constraints as assertions as you integrate your blocks into your simulation environment, or prove your constraints on a neighboring block if you have sufficient time.
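One way to follow this recommendation (a sketch; the FORMAL macro is a project convention we assume here, not a standard, and the signals are hypothetical) is to write each environment property once and compile it as an assumption for formal runs and as an assertion in simulation:

property p_no_req_while_busy;
  @(posedge clk) disable iff (reset)
  busy |-> !req;
endproperty

`ifdef FORMAL
  // Constrains the inputs during the block-level proof
  asm_no_req_while_busy: assume property (p_no_req_while_busy);
`else
  // Validates the same constraint against the real environment in simulation
  ast_no_req_while_busy: assert property (p_no_req_while_busy);
`endif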
Assumptions are necessary to eliminate false counterexamples (which prove a property false due to incorrect interactions from the environment). In general, static formal tools perform best with a minimal number of simple assumptions.
Recommendation 7. You can obtain the best proof results in terms of performance and capacity with a minimum number of constraints. Begin your proofs without any constraints, and then add constraints incrementally to eliminate counterexamples.
Managing complexity through strict mode. Often, engineers prefer to focus on separately verifying various modes of operation (that is, functionality) for their designs. For example, the engineers might have partially completed some functionality in the design they wish to verify (while other functionality remains incomplete). Alternatively, engineers might partition the set of properties they wish to prove into various design operating modes. This approach lets engineers initially disable portions of the design's functionality (that is, restrict or over-constrain the design to a single mode of interest) and then gradually enable additional functionality after completing a restricted proof.
A strict mode restriction is a special type of assumption in that it constrains the design behavior explored by the formal verification tool to a restricted subset of design behaviors. For example, a strict mode restriction may reduce a large set of opcodes to a smaller set of opcodes during a proof. Or a strict mode restriction may limit the input behavior to only read transactions during one phase of a proof, and then separately re-prove the design for write transactions.
Although a strict mode restriction (that is, the constraint) cannot be proved on a neighboring block (or used as an assertion during simulation since it limits the design behavior to a subset of behaviors), it is often an effective way to reduce complexity — thus enabling us to find bugs within complex designs.
Recommendation 8. Use a naming or labeling convention to quickly identify constraints that are actually strict mode restrictions.
It is important to be able to distinguish properties that have been proven using general assumptions from properties that have been proven using strict mode restrictions as constraints. We suggest you name any restriction that will be assumed with an r_ prefix, or adopt a SystemVerilog or PSL labeling convention that identifies assumptions that are actually strict mode restrictions.
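For example (a sketch with a hypothetical command encoding), a strict mode restriction limiting a proof phase to read transactions might be written and flagged as follows:

// Strict mode restriction: this proof phase explores read transactions only.
// The r_ prefix marks it as a restriction; unlike a true environment
// assumption, it must not be reused as an assertion in simulation.
r_reads_only: assume property (
  @(posedge clk) disable iff (reset)
  cmd_valid |-> (cmd == CMD_READ)
);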
A number of excellent technical articles discuss advanced abstraction techniques (for example, the compositional model checking work of McMillan and of Henzinger, Qadeer, and Rajamani cited at the end of this article). In particular, McMillan's verification of an implementation of Tomasulo's algorithm provides a static formal verification example of an out-of-order processing unit using multiple advanced abstraction techniques. We suggest you read these advanced references if you have selected a full proof strategy and are having problems achieving closure with your proofs.
A detailed discussion of advanced abstraction techniques is beyond the scope of this article. Instead, our goal is to sketch out the common techniques that are available, many of which do not require an advanced understanding. Note that sufficient access to the design team is imperative when attempting to achieve a full proof on a design block (particularly when an abstraction is required). In fact, without sufficient access, your efforts to prove all assertions (complex or easy) for a targeted design block will likely be incomplete.
In general, abstraction is the process of extracting the underlying essence of a concept, removing any dependence on real-world objects with which it might originally have been connected, and generalizing it so that it has wider applications. In electronic design, for example, it is not uncommon to perform RTL simulation by using a simple unit delay model to represent time. This form of abstraction allows the engineer to focus on functional aspects of the design while ignoring the detailed timing information (which can be verified more efficiently in a separate process).
In formal verification, abstraction is a common technique used to address complexity. Many engineers fear the concept of abstraction because they are concerned that the abstraction might introduce an error that would lead to the property proving true when it is really false (also referred to as a false positive). However, there is a class of abstractions known as safe abstractions (localization and decomposition, for example). With this class of abstractions, if the property is proven true on the abstracted model, then the property is guaranteed to be true on the real model. The reason for this is that (for this class of safe abstractions) the behavior described by the abstract model is a superset of the actual behavior described by the real model.
Decomposition and localization. For designs with very large state spaces, often you must decompose the design into sub-components to achieve manageable complexity. In fact, identifying coarse-grain decomposition is often straightforward. You can achieve it by simply following the component structure of the design. Using this decomposition, we can at times prove properties on the smaller pieces while abstracting other components.
One form of decomposition is known as assume-guarantee. For this form of decomposition, unlike general decomposition, we add assumptions to the input of a decomposed block and use them to prove properties on that block. Next, we convert the assumptions to assertions, which we then prove on the neighboring block (see Figure 6).
Figure 6. Assume-guarantee decomposition.
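A minimal sketch of assume-guarantee (all names hypothetical): suppose block B drives block A's valid_in, and A's proof depends on valid_in holding until it is accepted.

// Step 1: while proving block A, constrain its input protocol:
asm_valid_stable: assume property (
  @(posedge clk) disable iff (reset)
  valid_in && !ready_out |=> valid_in);

// Step 2: while proving block B, discharge that assumption by proving
// the identical obligation on B's corresponding output:
ast_valid_stable: assert property (
  @(posedge clk) disable iff (reset)
  valid_out && !ready_in |=> valid_out);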
Another form of fine-grained decomposition is known as localization. To demonstrate this concept, consider the design illustrated in Figure 7.
wr_en |=> ((data_out == $past(data_in)) || resend);
Figure 7. Cutpoint localization.
To simplify our proof of property p in Figure 7, we could remove the complex Parity Check logic by introducing a cut in the design (for example, using a cutpoint command). This essentially creates a simplified safe abstraction of the original circuit by allowing par_err to take any value (error or no error) at any point in time. Hence, while proving property p, the formal tool explores all possible combinations of par_err with respect to the input data bus. This not only reduces the complexity of the proof, it also yields a stronger result: we have proved that property p holds under any potential parity error condition. When we have completed the proof of p, we would then prove that the parity circuit functions correctly as a separate proof.
Cutpoint localization is an effective technique for addressing complexity. Essentially, we simplify the design by cutting out logic that is in the cone-of-influence of a particular property when the logic is really not necessary to prove the correctness of a given property. For example, in Figure 7 we abstracted the complex parity checking logic (via a cutpoint) and essentially replaced it with an input that would generate all possible sequences of active and inactive par_err values during a proof. You can apply cutpoint localization to many types of sub-block functionality, such as embedded storage elements, arithmetic units, error identification and correction circuits, and so forth.
Recommendation 9. For inconclusive proofs, simplify the design complexity by decomposing complex blocks or introducing cutpoint localization abstractions.
Using simulation to aid formal
One challenge encountered when proving properties on today's complex designs is reaching deep states within the design. For example, static formal will often waste many cycles (that is, explore uninteresting states) attempting to reach the high-water mark (that is, a potential corner case) of an embedded FIFO within a design. Obviously, as previously discussed, you can apply manual design abstractions in an attempt to simplify the problem (for example, reduce the FIFO's width and depth). If the properties were previously identified as critical and your verification goal is a complete proof that fits within your project schedule and resource constraints, then you can apply a manual abstraction to achieve closure.
Figure 8. Static formal vs. dynamic formal.
As an alternative, we can apply dynamic formal verification, which is an example of a process that uses simulation to aid formal verification. Dynamic formal verification first uses heuristics to harvest interesting (often deep) states from a simulation run. The set of interesting states is then automatically fed into the static formal engines as starting states for formal exploration (see Figure 8). This approach allows the static formal tool to explore around deep states that would otherwise be unreachable from the initial reset state without manual abstractions.
Recommendation 10. Apply dynamic formal verification on large or complex designs to improve coverage or hunt for complex bugs too difficult to find (that is, ones that require too much effort) using static formal verification.
Using formal to aid simulation
Earlier in this article, we discussed a fairly typical approach used to achieve simulation-based coverage closure, and Figure 2 illustrated the difficulty of completing it. You will eventually reach a point at which there is no clear return-on-effort in writing directed tests and adjusting random biases to close the remaining coverage holes. At that point, you should seek alternative solutions to improve verification confidence.
Formal verification can aid simulation-based coverage closure in three respects:
- Assist simulation by identifying scenarios (input stimuli) that can be used to cover holes.
- Assist simulation by exploring portions of the input space inadvertently missed by a lower-fidelity coverage model.
- Improve confidence on portions of the design associated with a coverage hole by proving properties within a coverage hole region.
You can turn a coverage point not easily hit during simulation into a target (that is, an assertion). Either static or dynamic formal verification can then assist by identifying an input stimulus sequence that would need to be applied in simulation to cover the particular hole.
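For example (a sketch; the FIFO signals are hypothetical), an elusive coverage point can be handed to formal as a target: either a cover property, or equivalently an assertion of the condition's negation whose counterexample supplies the trace. The witness trace the tool generates is precisely the stimulus sequence needed to hit the hole in simulation:

// Hard-to-reach corner: FIFO at its high-water mark
c_fifo_full: cover property (
  @(posedge clk) disable iff (reset)
  fifo_count == FIFO_DEPTH
);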
As we previously discussed, simulation-based coverage metrics measure the quality of the input stimulus' ability to activate (exercise) various structures within a design (for example, line coverage) or expected design behaviors (for example, functional coverage). A lower-fidelity coverage model can leave critical portions of the input space unexplored. However, dynamic formal verification (discussed in the section entitled "Using simulation to aid formal" above) can assist by exploring additional portions of the input space missed by simulation alone.
Finally, we can create a set of properties covering portions of the design that are associated with a coverage hole, and then apply formal verification to cover these properties. Obviously, there is a clear ROI to apply formal on this set of properties (since it was difficult to achieve closure on these associated coverage holes using either directed or random simulation).
Conclusion

This article introduced a process, consisting of a set of recommendations, that you can use to plan for verification closure using both simulation and static formal verification. We began by introducing a process to help you identify good candidate blocks for formal verification. We then discussed multiple techniques to achieve verification closure, including using formal to aid simulation and simulation to aid formal. Finally, we introduced various strategies to increase verification confidence on properties deemed too difficult to prove, formal-friendly property coding guidelines, and more advanced abstraction techniques to help you achieve full proofs.
By Harry Foster and Ping Yeung, Mentor Graphics
Harry Foster is chairman of both the Accellera Formal Verification Committee and the IEEE-1850 PSL Working Group. He is the original developer of the Open Verification Library and an author of four technical books ("Principles of Verifiable RTL Design," Second Edition, 2001; "Assertion-Based Design," 2004; "Applied Formal Verification," 2006; "Creating Assertion-Based IP," 2008).
Ping Yeung is a principal engineer at Mentor Graphics and was part of the 0-In team that developed and introduced assertion-based verification (ABV) to the industry. He has over 16 years of application development, marketing, and product development experience in the EDA industry, including positions at 0-In, Synopsys, and Mentor Graphics. He holds five patents in CDC and formal verification.
 Brooks, Jr., F.P. "No Silver Bullet: Essence and Accidents of Software Engineering," Information Processing. 1986.
 Piziali, A., Functional Verification Coverage Measurement and Analysis, Kluwer Academic Press, 2004.
Singhal, V., "Coverage: The Link Between Simulation and Formal," Oski Technologies, 2006. http://www.oskitech.com/articles/coverage-0506.pdf
Foster, H., Loh, L., Rabii, B., Singhal, V., "Guidelines for Creating a Formal Verification Testplan," In Proc. DVCon, 2006.
IEEE Standard for SystemVerilog: Unified Hardware Design, Specification, and Verification Language, IEEE Std. 1800-2005.
 IEEE Standard for Property Specification Language (PSL), IEEE Std. 1850-2005.
 Foster, H., Krolnik, A., Lacey, D., Assertion-Based Design, Kluwer Academic Press, 2003.
McMillan, K., "Verification of an Implementation of Tomasulo's Algorithm by Compositional Model Checking," In Proc. Computer Aided Verification (CAV), 1998.
McMillan, K., "Verification of Infinite State Systems by Compositional Model Checking," In Proc. CHARME, 1999.
Henzinger, T., Qadeer, S., Rajamani, S., "Verifying Sequential Consistency on Shared-memory Multiprocessor Systems," Lecture Notes in Computer Science, 1633:301–315, 1999.
Ho, R., "Maximizing Synergies of Assertions and Coverage Points within a Coverage-Driven Verification Methodology," In Proc. DesignCon, 2005.