A Practical Approach to Adopting Formal Property Checking  
Contributor: Mentor Graphics Corp.

February 10, 2010 -- For over a decade, companies such as HP, Intel, IBM and Motorola have been exploring formal property checking of their IC designs. These best-in-class companies have observed that formal property checking, when used in concert with traditional methodologies such as simulation, can lead to a host of verification benefits such as identifying scenarios that can be used to cover holes, proving properties within a coverage hole region and exploring portions of the input space inadvertently missed by lower-fidelity coverage models.

That’s the good news.

Just 37% of the industry has adopted the use of assertions within their simulation-based flows, according to a 2007 study by FarWest Research, funded by Mentor Graphics. And given that assertions form the foundation of formal verification methodologies, it’s perhaps no surprise that a paltry 19% of the industry has adopted formal property checking, according to the same research.

Successful integration of formal properties into a traditional simulation-based flow requires understanding a host of issues, including:

  • When and where to apply formal property checking—and equally important, when and where not to apply it.
  • How to improve simulation coverage closure using formal property checking.
  • How to improve formal property checking closure using simulation.

We accordingly address each of these issues in the sections that follow, which draw on excerpts from Harry Foster's 2009 book "Applied Assertion-Based Verification: An Industry Perspective." Though we'll be skipping much of the academic background here, those looking for a primer on the foundations of assertion languages might consider rousting up this slim 108-page volume, available on Amazon and elsewhere. (Most of the first three chapters are available on Google Books.) Other sources include various Mentor Graphics technical publications, including Foster and Ping Yeung's 2007 whitepaper "Planning Formal Verification Closure."

Getting started

An old proverb states that “he who fails to plan, plans to fail.” Yet in the disciplined, process-oriented world of verification, failure is more likely to stem from confusion about how best to plan the integration of formal property checking into an existing simulation-based flow than from a failure to plan in the first place.

Some projects set the bar too low when first evaluating formal property checking. Engineers might throw a design with a set of ad hoc assertions at a formal property checking tool just to see what value the process has to offer. The problem with this approach is that not all assertions are created equal. Many lower-level assertions, while ideal for reducing simulation debugging time, provide little or no value as formal proofs when considered in the context of a project’s overall verification objectives. In such cases, it’s natural – and inaccurate – to declare that formal proofs deliver too little return on the project team’s investment.

Project teams’ first attempts at formal property checking can just as easily fail due to overreach, particularly when the team’s ambition far surpasses its skill set. An inexperienced project team that selects a complex design block beyond the push-button capability of today’s formal verification technology is likely to get stuck until it acquires the advanced skills required to manually assist in completing the proof. (This problem is not unique to formal property checking. Consider the likely outcome when a team that lacks object-oriented programming savvy first attempts to construct a contemporary constrained-random, coverage-driven testbench.)

Of course, many design blocks do lend themselves to formal property checking and require minimal or no advanced skills. In the following section, we outline a test-planning process that helps to identify such blocks and begin down the path of formal property checking in a way that nurtures the organization’s current skill set.

Pick suitable design blocks, set your goals, then build a test-planning process

Turing Award winner Fred Brooks once quipped that “even the best planning is not so omniscient as to get it right the first time.” Notwithstanding Brooks’ wisdom, there are a few preliminary steps which, if followed, help to build a good test plan. First among these is identifying the design blocks that are most suitable for formal verification in the first place. The key criterion is whether the block is mostly sequential (that is, non-concurrent) or mostly concurrent.

Sequential design blocks (Figure 1) typically operate on a single stream of input data, even though there may be multiple packets at various stages of the design pipeline at any instant. An example of this sequential behavior is an instruction decode unit that decodes a processor instruction over many stages. Another example is an MPEG encoder block that encodes a stream of video data. Formal verification usually faces state explosion for sequential designs because, in general, the most interesting properties involve a majority of the flops within the design block.

Figure 1. Sequential dataflow paths.


Concurrent design blocks (Figure 2) deal with multiple streams of input data that collide with each other. An example is a multi-channel bus bridge block, which essentially transports packets unchanged from multiple input sources to multiple output destinations.

Figure 2. Concurrent dataflow paths.


The following coarse characterization often helps in determining whether formal is suitable. We can generally characterize design blocks as control- or datapath-oriented. We can further characterize datapath design blocks as either data transport or data transform (Figure 3).

Figure 3. Data verification flow.


A multi-channel bus bridge block is also an example of a data transport block. Data transform blocks perform a mathematical computation or an algorithm over a stream of input values (for example, an encoder block or a floating point unit). As a rule of thumb, when applying formal, choose blocks that are control-oriented or perform data transport with high concurrency.

Now, which candidate blocks are easy, requiring no (or minimal) formal skills, and which are difficult, requiring more advanced skills and additional manual work to complete the proof? Table 1 attempts to answer these questions for a broad class of design blocks. Our common-sense advice: if your organization has no prior formal experience, start with a candidate block that requires minimal skills and gradually grow the organization’s skill set over time.

Table 1. Formal skills required to handle various candidate design blocks.


Preliminary test plan work involves more than just identifying the most amenable design block, especially since there is a range of formal verification strategies to choose from. (We’ll get to these shortly.) However, prior to choosing a strategy, we recommend that you classify the key properties by answering the following questions:

  • Did a respin occur on a previous project for a similar property? (high ROI)
  • Is the verification team concerned about achieving high coverage in simulation for a particular property? (high ROI)
  • Is the property control-intensive? (high likelihood of success)
  • Is there sufficient access to the design team to help define constraints for a particular property? (high likelihood of success)

After ordering your list, assign a strategy to each property based on your verification goals, project schedule, and resource constraints. We recommend choosing from the following:

  • Full proof. Projects often have many properties in the list that are of critical importance and concern. For example, to ensure that the design is not dead in the lab, there are certain properties that absolutely must be error-free. These properties warrant applying the appropriate resources to achieve a full proof.
  • Bug-hunting. Using formal verification is not limited to full proofs. In fact, you can effectively use formal verification as a bug-hunting technique, often uncovering complex corner cases missed by simulation. The two main bug-hunting techniques are bounded model checking, where we prove that a set of assertions is safe out to some bounded sequential depth, and dynamic formal, which combines simulation and formal verification to reach deep complex states.
  • Interface formalization. The goal here is to harden your design’s interface implementation using formal verification prior to integrating blocks into the system simulation environment. In other words, your focus is purely on the design’s interface (versus a focus on internal assertions or block-level, end-to-end properties). The benefit of interface formalization is that you can reuse your interface assertions and assumptions during system-level simulation to dramatically reduce integration debugging time.
  • Improved coverage. Creating a high-fidelity coverage model can be a challenge in a traditional simulation environment. If a corner case or complex behavior is missing from the coverage model, then it is likely that behaviors of the design will go untested. However, dynamic formal is an excellent way to leverage an existing coverage model to explore complex behaviors around interesting coverage points. The overall benefits are improved coverage and the ability to find bugs that are more complex.

With pre-work done on identifying design blocks and appropriate strategies, test planning can finally begin. This can be tricky, given that formal verification often requires the introduction of advanced abstractions to get the set of assertions to converge during a formal proof. Still, any test plan can be boiled down to one overarching goal: create reusable verification components for simulation. Though there are countless variations, here’s one five-step process, grounded in our work and that of the Indian Institute of Technology’s Pallab Dasgupta and others, that may serve as a good starting point in building a plan that achieves this objective:

  1. Create a block diagram and interface description. Create a block diagram and a table describing the design component’s interface signals that must be referenced (monitored) when creating the set of assertions and coverage items. Use this list to determine the completeness of the requirement checklist during the review process.
  2. Create an overview description. Briefly describe the key characteristics of the design component. The overview need not be highly detailed, but it should highlight the major functions and features. Waveform diagrams are useful for describing temporal relationships among signals.
  3. Create a natural language list of properties. In natural language, list all properties for the design component. A recommended approach is to capture the list of properties in a table. Give each property a unique label that maps the eventual assertions back to the natural language properties.
  4. Convert natural language properties into formal properties. Convert each natural language property into a set of SystemVerilog Assertions or PSL assertions or coverage properties, adding any modeling code required to describe the intended behavior.
  5. Encapsulate assertions inside a module or interface. In this step, turn the set of related properties into a reusable verification component (an assertion-based monitor). To achieve this goal, analysis ports are added to the monitor for communication with other simulation verification components, providing a clear separation between assertion detection and simulation environment action. A brief sketch of steps 4 and 5 follows this list.
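
To make steps 4 and 5 concrete, here is a minimal sketch: a natural language property (“every request must be granted within one to four cycles”) converted to SystemVerilog Assertions and packaged as a reusable monitor. The interface signals, labels, and timing window are illustrative assumptions rather than details from any particular design, and a production monitor would also add the analysis ports mentioned in step 5, which we omit for brevity.

module req_gnt_monitor (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // REQ_GNT_01: every request is granted within one to four cycles
  a_req_gnt_01: assert property (
    @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt
  );

  // Companion coverage item: confirm the handshake is actually exercised
  c_req_gnt_01: cover property (
    @(posedge clk) disable iff (!rst_n)
    req ##[1:4] gnt
  );
endmodule

Because the monitor touches the design only through its ports, it can be attached non-intrusively with a bind statement and reused unchanged in block-level formal runs and system-level simulation.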

Achieving formal proof closure

Formal verification technology has matured to such an extent that much can now be completed automatically, thanks to under-the-hood enhancements of various verification tools and technologies. Nonetheless, as we pointed out in Table 1, there are classes of designs whose complexity is beyond the reach of any of these tools. Hence, achieving formal proof closure on these complex designs requires some formal skills and manual effort.

Deciding which technique can best address complexity is often dependent on the specific design, but several general concepts are more or less universal. Beginning to grasp these concepts is arguably the first step to achieving the full benefit of formal verification processes.

The first technique is to avoid creating overly complex assertions, which contribute to state explosion during static formal verification or detrimentally impact simulation performance. When possible, it is better to partition a complex assertion into a set of simpler assertions, as sketched below.
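
As a hedged illustration, consider the monolithic assertion below (commented out), which entangles three requirements in one temporal expression; all signal names are ours, not from any design discussed above. Splitting it hands the formal engine three smaller cones of influence, and a failure now points at exactly one requirement.

module split_assertions_sketch (
  input logic clk, start, busy, err, done, valid_out
);
  // Monolithic form (harder on the prover, vague when it fails):
  // a_complex: assert property (@(posedge clk)
  //   start |-> ##1 (busy && !err) ##[1:8] (done && valid_out));

  // Partitioned form:
  a_busy_follows_start: assert property (@(posedge clk)
    start |-> ##1 busy);

  a_no_err_while_busy: assert property (@(posedge clk)
    busy |-> !err);

  a_done_is_valid: assert property (@(posedge clk)
    start ##1 busy |-> ##[1:8] (done && valid_out));
endmodule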

The second technique is to use formal assumptions (often referred to as constraints) in a balanced way. Assumptions can both help and hinder the performance of the static formal verification tool. For example, complex temporal assumptions actually increase the proof complexity, which can yield inconclusive results. And too many assumptions can result in over-constraining the design. To avoid this, validate your assumptions in simulation to identify proofs that were potentially over-constrained.

Assumptions are necessary to eliminate false counterexamples (which prove a property false due to incorrect interactions with the environment). In general, static formal tools perform best with a minimal number of simple assumptions. So begin your proofs without any constraints, and then add constraints incrementally to eliminate counterexamples.
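
A minimal sketch of this incremental style, assuming a simple request interface (the names and the environment rules are illustrative):

module input_assumes_sketch (
  input logic clk, rst_n, req, abort
);
  // Run 1 (no assumptions) produced a counterexample driving back-to-back
  // requests the real environment can never generate, so:
  m_no_back_to_back_req: assume property (@(posedge clk) disable iff (!rst_n)
    req |=> !req);

  // Run 2 aborted a transaction that was never started, so:
  m_abort_implies_req: assume property (@(posedge clk) disable iff (!rst_n)
    abort |-> req);
endmodule

Because simulators check an assume property just as they check an assertion, running this same file in simulation is the over-constraint check recommended above.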

The third approach is to manage complexity through the use of strict mode. This is a special type of assumption that limits the formal verification tool to exploring a restricted subset of design behaviors. For example, a strict mode restriction may reduce a large set of opcodes to a smaller set during a proof. Or it may limit the input behavior to only read transactions during one phase of a proof, after which the design is separately re-proven for write transactions.
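
Restriction directives are tool-specific, so the sketch below approximates the effect with an ordinary assumption; the two-bit command encoding is an illustrative assumption. One formal run proves the properties with only reads (and idles) allowed, and a second run swaps the restriction to writes and re-proves.

module read_only_restriction_sketch (
  input logic       clk, rst_n,
  input logic [1:0] cmd   // illustrative encoding: 00 idle, 01 read, 10 write
);
  localparam logic [1:0] CMD_IDLE = 2'b00;
  localparam logic [1:0] CMD_READ = 2'b01;

  // Phase one of the proof: explore read behavior only.
  m_reads_only: assume property (@(posedge clk) disable iff (!rst_n)
    cmd inside {CMD_IDLE, CMD_READ});
endmodule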

Lastly, complexity-reduction techniques such as decomposition are critical to achieving proof closure. One form of decomposition is known as assume-guarantee. Here, we add assumptions to the inputs of a decomposed block and use them to prove properties on that block. Next, we convert the assumptions to assertions, which we then prove on the neighboring block (Figure 4).

Figure 4. Assume-guarantee decomposition.
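
The hedged sketch below shows the two halves of such a proof for neighboring blocks A and B connected by a valid/ready interface; the signal names and the stability rule are illustrative assumptions. While proving block B, the interface behavior is assumed; the obligation is then discharged by proving the identical property as an assertion on block A.

module b_side_env (
  input logic clk, rst_n, valid, ready
);
  // Assumed while proving properties internal to block B: once asserted,
  // valid holds until ready accepts it.
  m_valid_stable: assume property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> valid);
endmodule

module a_side_check (
  input logic clk, rst_n, valid, ready
);
  // The same behavior, now proved as an obligation on block A's outputs:
  a_valid_stable: assert property (@(posedge clk) disable iff (!rst_n)
    valid && !ready |=> valid);
endmodule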


Another form of fine-grained decomposition is known as localization. To demonstrate this concept, consider the design illustrated in Figure 5.

property p;
  @(posedge clk)
    wr_en |=> ((data_out == $past(data_in)) || resend);
endproperty

a_p: assert property (p);

Figure 5. Cutpoint localization.


To simplify our proof of property p in Figure 5, we could remove the complex Parity Check logic by introducing a cut in the design (for example, using a cutpoint command). This essentially creates a simplified safe abstraction of the original circuit by allowing par_err to assume any value of error or no error at any point in time. Hence, while proving property p, the formal tool will explore all possible combinations of par_err with respect to the input data bus. This not only simplifies the complexity of the proof, it also creates a strong proof, in that we have proved that property p holds under any potential parity error condition. When we have completed the proof of p, we would then prove that the parity circuit functions correctly as a separate proof.

Cutpoint localization is an effective technique for addressing complexity. Essentially, we simplify the design by cutting out logic that sits in the cone of influence of a property but is not actually necessary to prove that property’s correctness. For example, in Figure 5 we abstracted the complex parity checking logic (via a cutpoint) and essentially replaced it with an input that generates all possible sequences of active and inactive par_err values during a proof. You can apply cutpoint localization to many types of sub-block functionality, such as embedded storage elements, arithmetic units, error identification and correction circuits, and so forth.
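
Cutpoint commands themselves are tool-specific, but the effect can be sketched portably. In the formal-only top below, par_err is declared but never driven; formal tools treat an undriven signal as a free variable that may take any value on any cycle, which is exactly the safe over-approximation the cutpoint creates. The dut module and its ports are hypothetical stand-ins for the Figure 5 design.

module fv_top (
  input  logic       clk, rst_n, wr_en,
  input  logic [7:0] data_in
);
  logic [7:0] data_out;
  logic       resend;
  logic       par_err;  // intentionally undriven: a free variable under formal

  // Hypothetical design instance, with the free variable standing in
  // where the removed Parity Check logic used to drive par_err.
  dut u_dut (
    .clk(clk), .rst_n(rst_n), .wr_en(wr_en), .data_in(data_in),
    .par_err(par_err), .data_out(data_out), .resend(resend)
  );

  a_p: assert property (@(posedge clk)
    wr_en |=> ((data_out == $past(data_in)) || resend));
endmodule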

Static + dynamic formal; simulation + formal verification

One challenge of proving properties on today’s complex designs is reaching deep states within the design. For example, static formal will often waste many cycles exploring uninteresting states while attempting to reach a potential corner case in a design’s embedded FIFO. Obviously, you can apply manual design abstractions, for example reducing the FIFO’s width and depth, in an attempt to simplify the problem. If the properties were previously identified as critical and your verification goal is a complete proof that fits within your project schedule and resource constraints, then such a manual abstraction is a reasonable way to achieve closure.
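
A minimal sketch of that abstraction follows, assuming the embedded FIFO exposes WIDTH and DEPTH parameters (the fifo module and its port names are illustrative). With DEPTH cut to 4 from, say, 1024, the full and empty corner cases sit only a few cycles from reset, well within a static engine’s reach; the trade-off is that any property sensitive to the real depth must still be verified another way.

module fifo_formal_top (
  input  logic       clk, rst_n, push, pop,
  input  logic [3:0] din
);
  logic [3:0] dout;
  logic       full, empty;

  // Shrunken instance: 4 entries of 4 bits instead of the full-size
  // configuration used in the actual design.
  fifo #(.WIDTH(4), .DEPTH(4)) u_fifo (
    .clk(clk), .rst_n(rst_n), .push(push), .pop(pop),
    .din(din), .dout(dout), .full(full), .empty(empty)
  );

  // A deep state that the reduced configuration makes cheap to reach:
  c_fifo_full: cover property (@(posedge clk) disable iff (!rst_n) full);
endmodule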

Figure 6. Static formal vs. dynamic formal.


As an alternative, we can apply dynamic formal verification, which is an example of a process that uses simulation to aid formal verification. That is, dynamic formal verification first uses heuristics to harvest interesting and often deep states from a simulation run. Then, the set of interesting states is automatically fed into the static formal engines as starting states for formal exploration (Figure 6). This approach allows the static formal tool to start its exploration around deep states that would otherwise be impossible to reach from an initial reset state without manual abstractions.

Figure 7 illustrates the difficulty of achieving coverage closure. At some point there is no clear return on effort in writing directed tests and adjusting random biases to close the remaining coverage holes. When you reach that point, you should seek alternative solutions to improve verification confidence.

Figure 7. Functional coverage input space.


Formal verification can aid simulation-based coverage closure by:

  • Assisting simulation by identifying scenarios (input stimuli) that can be used to cover holes,
  • Assisting simulation by exploring portions of the input space inadvertently missed by a lower-fidelity coverage model and
  • Improving confidence on portions of the design associated with a coverage hole by proving properties within a coverage hole region.

You can turn a coverage point not easily hit during simulation into a formal target, or rather, an assertion. Either static or dynamic formal verification can then assist by identifying an input stimulus sequence that would need to be applied in simulation to cover the particular hole, as sketched below.
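
A minimal sketch, assuming a FIFO-style corner case that simulation never reached (all signal names are illustrative): if the cover property below is reachable, the formal tool returns a witness trace, which is effectively a recipe for the directed test needed to close the hole in simulation.

module cover_target_sketch (
  input logic clk, rst_n, fifo_full, wr_en, retry
);
  // The simulation coverage hole, recast as a formal target: a write
  // attempted while the FIFO is full and a retry is pending.
  c_full_write_retry: cover property (@(posedge clk) disable iff (!rst_n)
    fifo_full && wr_en && retry);
endmodule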

Simulation-based coverage metrics, such as line or functional coverage, measure how well the input stimulus activates various structures within a design or expected design behaviors. A lower-fidelity coverage model can leave critical portions of the input space unexplored. However, dynamic formal verification can assist by exploring additional portions of the input space missed by simulation alone.

Finally, we can create a set of properties covering portions of the design that are associated with a coverage hole, and then apply formal verification to cover these properties. Obviously, since it was difficult to achieve closure on these associated coverage holes using either directed or random simulation, there is a clear ROI to apply formal on this set of properties.

Of course difficulties in design go far beyond just achieving coverage closure. Indeed the idea encapsulated in Fred Brooks’ 1986 landmark article “No Silver Bullet: Essence and Accidents of Software Engineering” is as true today as ever, including in hardware verification. Despite the rise of reusable IP, there’s just too much complexity for any one methodology to suffice.

Yet even though silver bullets remain elusive, integrated approaches that include formal verification can yield real results. See the references below for several industry examples of success in incorporating assertions, the linchpin of formal verification. And feel free to engage us in dialog via the Verification Horizons blog or by following Harry Foster on Twitter.

By Harry Foster and Ping Yeung

Harry Foster is chairman of both the Accellera Formal Verification Committee and the IEEE-1850 PSL Working Group. He is the original developer of the Open Verification Library and the author of four technical books (“Principles of Verifiable RTL Design,” Second Edition, 2001; “Assertion-Based Design,” 2004; “Applied Formal Verification,” 2006; “Creating Assertion-Based IP,” 2008). This article draws on his monograph “Applied Assertion-Based Verification,” published in 2009.

Ping Yeung, a Mentor Graphics principal engineer, delivered an invited paper on this topic at the International SoC Design Conference, held Nov. 22-24 in Busan, South Korea and organized in part by IEEE. Yeung was part of the 0-In team that developed and introduced assertion-based verification (ABV) to the industry. He has over 16 years of application development, marketing, and product development experience in the EDA industry, including positions at 0-In, Synopsys, and Mentor Graphics. He holds five patents in CDC and formal verification.

References

Y. Abarbanel, I. Beer, L. Gluhovsky, S. Keidar, Y. Wolfsthal, “FoCs—Automatic Generation of Simulation Checkers from Formal Specifications,” Proceedings of the 12th International Conference on Computer Aided Verification, pp. 414-427, (2000)

H. Foster, C. Coelho, “Assertions Targeting a Diverse Set of Verification Tools,” Proceedings of the International HDL Conference, (2001)

H. Foster, A. Krolnik, D. Lacey, Assertion-Based Design, 2nd Edition, Kluwer Academic Publishers, (2004)

H. Foster, K. Larsen, M. Turpin, “Introducing the New Accellera Open Verification Library Standard,” Proceedings of DVCon, (2006)

M. Kantrowitz, L. Noack, “I’m Done Simulating; Now What? Verification Coverage Analysis and Correctness Checking of the DECchip 21164 Alpha Microprocessor,” Proceedings of the Design Automation Conference, pp. 325-330, (1996)

A. Krolnik, Cyrix M3 Phase 1 Report, Cyrix Inc. internal report, (1998)

B. Turumella, M. Sharma, “Assertion-Based Verification of a 32-Thread SPARC CMT Microprocessor,” Proceedings of the 45th Design Automation Conference (DAC), pp. 256-261, (2008)

Go to the Mentor Graphics Corp. website to learn more.

Keywords: ASICs, ASIC design, FPGAs, field programmable gate arrays, FPGA design, EDA, EDA tools, electronic design automation, formal verification, electronic system level design, ESL, assertion based verification, assertion-based verification, ABV, assertions, Mentor Graphics
