Advanced Block-Level Bug-Hunting with Assertion-Based Verification Methodology and Hybrid Formal Verification  
Contributor: Synopsys, Inc.

June 1, 2005 -- In the past few years, the semiconductor industry has been adding ever larger engineering resources to meet rapidly increasing functional verification challenges. Of particular interest are new verification tools, methodologies and flows that could prove more effective in ensuring correct design functionality, with less effort and in a shorter amount of time.

Assertion-based verification (ABV) is among the most popular methodologies being adopted across the industry. SystemVerilog is increasingly the standard language being deployed by design and verification teams to implement ABV. Specifying assertions as RTL code is being written has tremendous benefits for block-level design and verification, with significant carryover benefits for chip verification as well.

The industry is also witnessing the proliferation of formal analysis tools, some of them strictly static (i.e., no auxiliary use of simulation), some of them more advanced and hybrid in nature (i.e., formal analysis engines coupled with dynamic simulation). Formal analysis provides the means to verify blocks thoroughly and efficiently, resulting in faster chip-level integration and verification.

The objective of this article is to highlight how coupling an ABV methodology with an advanced hybrid formal tool dramatically improves verification productivity and increases confidence in a functionally correct design. The article provides insights to find design bugs more efficiently, increase verification confidence by providing analytical functional coverage data, and maximize the benefits accrued.

Bug-finding technologies

Traditionally, simulation has been the accepted method to verify a design at all levels, from blocks to full chips. There is no doubt that simulation, especially when using advanced testbench techniques such as constrained-random stimulus generation, is indeed very effective. ABV plays an important role here; assertions running in simulation increase design observability and help detect design bugs at their sources.

However, ABV and formal analysis provide a complementary method of verification at the block level. Formal analysis "targets" assertions and tries to either prove them correct or show how they can fail. Formal proofs that assertions are correct are very valuable since they increase confidence in the design. A "counterexample" showing how an assertion can fail is also very valuable, since it indicates either a design bug or an error in the assertion specification.
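As a simple illustration of the kind of assertion a formal tool can target, consider the following minimal SystemVerilog sketch; the signal names and the four-cycle response bound are illustrative assumptions, not taken from any specific design discussed here.

// Hypothetical block-level assertion: every request must be acknowledged
// within four clock cycles. Formal analysis will either prove this property
// or produce a counterexample trace that shows how it can fail.
module handshake_assertion_sketch (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic ack
);

  a_req_gets_ack: assert property (@(posedge clk) disable iff (!rst_n)
                                   req |-> ##[1:4] ack);

endmodule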

The more bugs that are detected and fixed at the block level, the easier the chip-level integration and verification. If buggy blocks slip through to the chip verification stage, the process becomes far more complicated: the verification engineers must coordinate with the design engineers to diagnose, debug and fix the bugs found at the chip level. It is much better if the designers can find the vast majority of their bugs while developing their blocks.

Bug finding and functional coverage

The ability to find corner-case bugs in a design is directly linked to the ability to exercise the corner cases themselves. With constrained-random testbenches, this exercise occurs by the generation of high-coverage stimuli. With formal analysis, this exercise comes automatically in the process of targeting the assertions. In both cases, functional coverage can be used to provide the metrics to measure the degree of design exercise.

Functional coverage refers to the identification of functionally significant coverage "points" or "objects" that must be exercised during the verification process. These coverage objects can be tracked during simulation to gain insight into how well the design is being verified. In addition, some RTL formal tools such as Synopsys' Magellan™ can actually target coverage points and specifically try to exercise them during the hybrid analysis.

Hybrid formal verification can equally well target corner-case errors (specified by assertions) or legal corner-case coverage points. Common examples of coverage targets include:
  • A combination of design registers
  • A region of the design code
  • A particular design functionality

In its simplest incarnation, a coverage target is a combination of internal design signals.

It is easy to specify coverage targets for metrics such as finite state machine (FSM) state coverage, in which the coverage target is the state vector of the FSM. Similarly, FSM transition coverage can be measured with a coverage target obtained by concatenating the previous state vector with the current state vector, as shown in Figure 1.

Figure 1.
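A minimal SystemVerilog sketch of this construction is shown below. The signal names (curr_state, prev_state), the 3-bit state width and the module interface are assumptions made only for illustration.

// Sketch: forming FSM state and FSM transition coverage targets.
// Names and widths are hypothetical; the FSM itself lives elsewhere.
module fsm_coverage_sketch (
  input  logic       clk,
  input  logic       rst_n,
  input  logic [2:0] curr_state   // state vector of the FSM being observed
);

  logic [2:0] prev_state;

  // Remember the previous state so a transition is visible every cycle.
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) prev_state <= '0;
    else        prev_state <= curr_state;
  end

  // FSM state coverage: the coverage target is the state vector itself.
  covergroup cg_state @(posedge clk);
    state_cp: coverpoint curr_state;
  endgroup

  // FSM transition coverage: the coverage target is the concatenation of
  // the previous and current state vectors, as described for Figure 1.
  covergroup cg_transition @(posedge clk);
    trans_cp: coverpoint {prev_state, curr_state};
  endgroup

  cg_state      cov_state = new();
  cg_transition cov_trans = new();

endmodule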



Targeting coverage objects

The ability to exercise a design well requires the ability to target areas of low coverage. There are several issues here. First, the designer needs to determine whether an area of the design is a low-coverage one. Although coverage information may be available to the verification team, this is typically obtained after running millions of simulation cycles at the chip level. The designer needs coverage information at the block level, long before full-chip tests are running.

Assuming that this information is available, the next issue is how the design engineer can guide a tool to exercise these low-coverage areas. The techniques used at the chip level, with verification experts tweaking portions of tests, changing weights in constraints, or adding specific tests to achieve higher coverage, require a great deal of resources and might not be applicable at the block level.

Finally, conclusive functional coverage data is important to determine whether a given verification task (characterized by a coverage target being exercised) can be considered completed or not. For instance, how does a design or verification engineer know that a coverage target will ever be exercised?

Too often, verification engineers spend weeks writing additional tests, despite a "gut feel" that no combination of vectors will ever exercise a particular coverage target. Often these engineers are able to provide convincing data to their management, but this requires lengthy and expensive visual code inspection.

Magellan's hybrid RTL formal architecture is built to address all of the above issues. Because it uses a combination of formal analysis and simulation, it can produce formal proofs and counterexamples while also generating high-coverage stimuli to exercise the design at the block level. The tool requires nothing beyond what an ABV methodology already provides: the assertions themselves are all that is needed to exploit its Adaptive Coverage Convergence feature, a unique combination of coverage and formal analysis. It enables solid verification results at the block level much earlier than can be obtained by constrained-random simulation at the chip level.

Adaptive Coverage Convergence

When using the Adaptive Coverage Convergence mode, the user is required to specify at least one coverage target. The tool's proprietary algorithms use formal reachability and unreachability engines and user-specified coverage targets to automatically generate stimuli that will exercise them.

In addition to the examples given earlier, coverage targets can also be specified using the SystemVerilog cover property statement. In this case, the verification objective is either to confirm that specific corner case conditions have been exercised, or to prove that such corner case conditions will never occur because they are unreachable.

An example of a coverage target written in SystemVerilog is the following:

property p;
  @(posedge clk) req ##[1:3] ack;
endproperty

c1: cover property (p);

genvar i;
generate
  for (i = 1; i <= 3; i = i + 1) begin : loop
    c2: cover property (@(posedge clk) req ##i ack);
  end
endgenerate

This example shows that SystemVerilog is suitable for expressing coverage objects in which the interrelation between design signals spans multiple clock cycles.

In this case, the design has request (req) and acknowledge (ack) signals that work as "pulses" one clock cycle wide. The coverage object instantiates three instances of the cover property generated by the loop. These instances will be named loop[1].c2, loop[2].c2, and loop[3].c2, matching a specific separation (of 1, 2, or 3 clock cycles, respectively) between the req and ack pulses. The coverage object will thus count all the cases where req and ack are separated by 1, 2 or 3 clock cycles.

SystemVerilog coverage objects can easily address value and data range coverage and corner-case scenarios that the design or verification engineer wants to exercise with dynamic simulation. SystemVerilog can also easily describe, for example (two of these are sketched after the list):
  • Which bits of one-hot state machines are covered
  • What are the minimum and maximum latencies of a given protocol
  • How many times the FIFO overflowed during simulation
  • What combination of bus arbitration requests occurred
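Two of the items above are sketched below; the signal names (onehot_state, fifo_overflow, bus_req) and their widths are assumptions made only for illustration, and the minimum/maximum-latency case is already covered by the req/ack cover properties shown earlier.

// Illustrative sketch only: signal names and widths are assumptions.
module coverage_examples_sketch (
  input  logic       clk,
  input  logic       rst_n,
  input  logic [7:0] onehot_state,   // one-hot encoded FSM state register
  input  logic       fifo_overflow,  // asserted when the FIFO overflows
  input  logic [3:0] bus_req         // one bus-arbitration request per bit
);

  // Which bits of a one-hot state machine are covered.
  covergroup cg_onehot @(posedge clk);
    onehot_cp: coverpoint onehot_state {
      bins states[] = {8'h01, 8'h02, 8'h04, 8'h08,
                       8'h10, 8'h20, 8'h40, 8'h80};
    }
  endgroup

  // What combinations of bus arbitration requests occurred.
  covergroup cg_req @(posedge clk);
    req_cp: coverpoint bus_req;   // all 16 request combinations
  endgroup

  // How many times the FIFO overflowed during simulation (each hit of this
  // cover property is counted by the simulator).
  c_overflow: cover property (@(posedge clk) disable iff (!rst_n) fifo_overflow);

  cg_onehot cov_onehot = new();
  cg_req    cov_req    = new();

endmodule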

The flexibility of specifying coverage targets gives the user the capability to quickly and easily generate high-coverage stimuli and gives access to a vast spectrum of coverage metrics. State machine registers, circuit status flags, guards in case or if-then-else statements, and inputs to non-synthesizable reference models are all good examples of coverage targets.

Verification completion: formal unreachability analysis

Neither constrained-random nor any other form of simulation by itself provides any information about what coverage targets are actually reachable. Verification engineers may spend weeks trying to reach design states that cannot be exercised by any legal sequence of inputs.

Adaptive Coverage Convergence is coupled with advanced formal engines that perform unreachability analysis on the coverage targets. Once a run is concluded, the user can be confident about the completeness of the verification task because reachable, unreachable and unknown targets are clearly enumerated. The verification task is complete whenever the sum of reachable and unreachable targets equals the total number of values the coverage target can take (2^N for an N-bit target).

It is interesting to note that the coverage targets that hybrid formal analysis reports as unreachable could represent functional design bugs. Most of the time, unreachable functional coverage points are due not to "dead" or redundant code, but rather to a design error that prevents the associated part of the design from being exercised when intended.

Another advantage of the hybrid formal approach is that its coverage-driven simulation allows further analysis of coverage targets for which formal proofs are not found. The stimuli produced by the tool heavily exercise the design and increase the probability of exposing existing design flaws. This finds deep corner-case bugs faster, especially for high-level properties that would otherwise require manual intervention or, with hybrid verification, merely a slightly longer runtime.

Bus bridge verification example

One recent Magellan user benchmarked Adaptive Coverage Convergence against pseudo-random simulation techniques on a bus bridge. The bridge included three FSMs and was about 400K gates in size. By the time the tool was used on this block, the user had already spent three weeks building the pseudo-random verification environment. After some time, the verification engineer felt comfortable with some portions of the verification task, and with the measurement of metrics such as arc coverage and state coverage for each FSM.

However, the verification engineer still had some doubts. When he measured the "product" coverage of the three FSMs, only 46 states (out of 128 possible) had been reached. As shown in Figure 2, product coverage is a metric representing how well the three state machines are exercised together; it is measured by concatenating all three FSM state vectors. This metric is particularly meaningful when, given the complexity of the interaction among the state machines, any untested combination of states could produce an illegal design behavior, or bug.

Figure 2.
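A minimal sketch of such a product coverage target follows. The register names and the 3+2+2 bit split (7 bits in total, giving the 128 possible combinations mentioned above) are assumptions made only for illustration.

// Sketch of a "product coverage" target for three interacting FSMs.
// Register names and widths are hypothetical.
module product_coverage_sketch (
  input  logic       clk,
  input  logic [2:0] fsm_a_state,
  input  logic [1:0] fsm_b_state,
  input  logic [1:0] fsm_c_state
);

  // The coverage target is the concatenation of all three state vectors,
  // so each value of the target is one combination of the machines' states.
  covergroup cg_product @(posedge clk);
    product_cp: coverpoint {fsm_a_state, fsm_b_state, fsm_c_state};
  endgroup

  cg_product cov_product = new();

endmodule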



The verification engineer did not know whether he had reached all possible states, whether there were any unreachable combinations of states, or whether he could increase this particular coverage metric. In less than one week, the verification engineer was able to produce the comparison between the traditional pseudo-random and Adaptive Coverage Convergence approaches. The results are shown in Figure 3.

Figure 3.



These results show that pseudo-random testing was effective for coverage metrics such as "FSM state" and "FSM arc." Even though hybrid formal exercised the reachable states in less time and reported unreachable coverage targets, there was no difference in the number of states reached. However, when the verification engineer looked at product coverage, the advantage of Adaptive Coverage Convergence became overwhelming. The tool was able to exercise 60 different coverage targets in less than eight hours, while pseudo-random testing was able to reach only 46, after almost 40 hours. In addition, the tool's proof engines reported that 68 states were unreachable.

The engineer's verification confidence was greatly increased: the tool not only showed which combinations of states could be reached, but it also exercised those combinations and provided conclusive information about which combinations were unreachable. The 60 reachable and 68 unreachable targets together account for all 128 possible state combinations, so after eight hours of work the engineer knew that no additional verification was required for this particular verification task.

PCI Express verification example: link training status state machine

Another recent project involved a user in a time crunch trying to verify a PCI Express core. The particular block that the user was interested in verifying was the Link Training Status State Machine, whose state diagram is shown in Figure 4.

Figure 4.



For this particular block, the user had neither the time nor the resources to build a block-level testbench, and thus decided to proceed directly to block integration and chip-level verification. After weeks of full-chip simulation, no bugs had been found in this particular block. The user grew concerned. How was it possible that a completely unverified block did not have any bugs? Was it possible that the testbench did not have enough coverage to exercise the innermost portions of the PCI Express core?

While any coverage metric is being measured, illegal behavior must also be detected, so having assertions in place that check the correct behavior of the FSM is key. Figure 5 shows a simplified example of a 4-state FSM that illustrates the verification process used. In this example there are 16 possible transitions; however, in any real FSM many of the possible transitions will be unreachable and thus impossible to exercise.

Figure 5.



Unreachable transitions remain unreachable even if the verification is performed without any constraint rules on the design inputs. Constraints, when added, will most likely increase the number of unreachable transitions. Once the unreachable transitions are classified, the next step in this verification task is to determine whether there are any illegal transitions that are not design bugs. These illegal transitions need to be eliminated by adding input constraints. As the name implies, constraints limit the input vector space by allowing only legal vector sequences to be applied at the block inputs, thus eliminating "false negatives."
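For the simplified 4-state FSM of Figure 5, an assertion representing the state diagram and an input constraint might look like the sketch below; the state encoding, the legal arcs out of the idle state, and the constrained input are all assumptions made only for illustration.

// Hypothetical checks for a 4-state FSM. Encoding, legal arcs and the
// constrained input are illustrative assumptions, not the real design.
module fsm_transition_checks_sketch (
  input  logic       clk,
  input  logic       rst_n,
  input  logic [1:0] state,      // 00=IDLE, 01=REQ, 10=XFER, 11=DONE (assumed)
  input  logic       illegal_in  // block input that must be constrained
);

  // Input constraint: only legal stimulus is applied at the block inputs,
  // eliminating "false negatives" during formal analysis.
  a_legal_input: assume property (@(posedge clk) disable iff (!rst_n)
                                  !illegal_in);

  // One arc of the state diagram: from IDLE the FSM may only remain in IDLE
  // or move to REQ. A violation is either a design bug or a missing constraint.
  a_idle_arcs: assert property (@(posedge clk) disable iff (!rst_n)
                                state == 2'b00 |=> state inside {2'b00, 2'b01});

endmodule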

The desirable result of this verification process is that the reachable transitions are all correct and the unreachable transitions are completely classified, so that the verification engineer knows that, as far as state transitions are concerned, the FSM behaves according to the specification. In the process of adding constraints, the FSM may reveal a functional bug, detected either by manual comparison with the FSM state diagram or by assertions representing the state diagram. Any such bugs must be fixed in order to have a fully verified FSM design.

PCI Express verification results

The user chose hybrid formal verification, and the process shown in Figure 5, to verify the Link Training Status State Machine block. In addition to the high capacity needed to verify the design's properties, Magellan had the ability to generate high-coverage stimuli and to gather conclusive coverage data for the "FSM Transition Coverage" metric. This approach was particularly appealing because, in this mode, the user had only to provide one or more coverage targets to get meaningful results.

As shown in Figure 6, the FSM transition coverage metric was formed by concatenating the FSM's previous-state and current-state vectors. The resulting coverage target could be exercised in 4,096 possible ways; the concatenation of the two 6-bit state vectors yields a 12-bit target, and hence 4,096 possible transitions. This is a huge number of transitions, and clearly many of them may not even be reachable. The user wondered whether any of these were unreachable transitions caused by a design bug, and whether it was possible to restrict the number of transitions to a more practical number.

Figure 6.



When FSM transition coverage verification was first performed, the tool reported all 4,096 possible transitions as reachable. The reason was that one important constraint had not been applied to the design: the block had a "test" pin that allowed the FSM to perform any state transition. After constraining this particular input, the tool classified 140 transitions as reachable; all other transitions were conclusively classified as unreachable.
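A constraint of this kind can be expressed as a one-line assume property; the pin name (test_mode) and its active level are assumptions, since the article does not give them.

// Hypothetical constraint: keep the block's "test" pin inactive so the FSM
// cannot take the arbitrary transitions that test mode would allow.
a_no_test_mode: assume property (@(posedge clk) test_mode == 1'b0);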

The number of reachable transitions was still higher than expected; the specification for this block listed only 34 legal state transitions. After further constraining the block's inputs, the number of reachable transitions was reduced to 40. At this point the designer looked at the results provided by Magellan and at the input stimuli generated by the tool. He soon realized that six of the undesired transitions were due to a complex corner-case bug that had gone undetected after weeks of chip-level simulations.

It took just one afternoon to expose this corner-case bug. There was no need to use a pre-existing testbench, and the constraints needed for this verification task were added incrementally as the verification was performed on the block.

All transitions were conclusively classified, so for this particular coverage target (FSM transition coverage) the verification task was complete, and the verification engineer knew exactly when that point had been reached.

Verification reuse of behavioral reference models

For a new design, a number of behavioral reference models, usually written in C/C++, are frequently available. Often these reference models are never reused in an RTL verification environment, especially when their level of abstraction does not match that of the RTL. One way to reuse reference models with hybrid formal analysis is illustrated in Figure 7.

Figure 7.



The reference model can be transaction, instruction, or cycle accurate. Given the level of accuracy, the verification engineer can quickly design a verification comparator that will check the output of the reference model against the output of its RTL implementation.

To complete the environment setup, the reference model and its RTL implementation will have to be exercised with the same stimuli. This is illustrated in the figure.
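A minimal sketch of such a comparator is shown below, assuming a cycle-accurate C reference model imported through DPI-C. The function name, port list, bit widths and the one-cycle output alignment are all illustrative assumptions.

// Sketch of a verification comparator: the RTL implementation and a C/C++
// reference model are driven by the same stimulus and compared every cycle.
// Names, widths and alignment are assumptions made only for illustration.
module ref_model_comparator_sketch (
  input  logic        clk,
  input  logic        rst_n,
  input  logic [31:0] stim_in,   // same stimulus driven to both models
  input  logic [31:0] rtl_out    // registered output of the RTL implementation
);

  // Cycle-accurate reference model imported through DPI-C.
  import "DPI-C" function int unsigned ref_model_step(input int unsigned data);

  logic [31:0] ref_out;

  // The reference model is stepped with the same stimulus as the RTL; the
  // one-cycle delay keeps its output aligned with the registered RTL output.
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n) ref_out <= '0;
    else        ref_out <= ref_model_step(stim_in);
  end

  // The comparator: the RTL output must match the reference model's output.
  a_outputs_match: assert property (@(posedge clk) disable iff (!rst_n)
                                    rtl_out == ref_out);

endmodule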

In the figure, the tool is exercising the "FSM state coverage" metric of the FSM embedded in the model's RTL implementation. This coverage metric is of particular interest because the FSM is not present in the behavioral model, but it is implemented in the RTL first and in silicon later.

The tool is also used to generate very high-coverage stimuli quickly and inexpensively. These stimuli are driven by the user's choice of interesting coverage targets. The approach described eliminates the need for a specialized, non-reusable block-level testbench. Constraints are still needed to eliminate false negatives; the upside is that, once available, those constraints enable the effective use of formal analysis.

Conclusions

Higher design complexity requires engineers to find bugs earlier, at the block level, before chip-level integration and verification. A new methodology requiring a higher verification investment at the block level is needed. ABV fits this requirement. In addition, more and more verification literature highlights the high quality of block-level verification that is achieved by including ABV and RTL formal verification early in the functional verification process.

Hybrid RTL formal verification provides the well-known and increasingly accepted model-checking functionality. Additionally, corner-case bugs can be found by means of Adaptive Coverage Convergence. A chosen coverage metric, such as "cover" coverage or formal "state reachability", intelligently drives the built-in simulator to exercise all the reachable coverage targets. At the same time, as shown in Figure 8, unreachability analysis, by reporting unreachable coverage targets, guarantees the conclusiveness of the verification results for a given block-level verification task.

Figure 8.



There is no reason to try to exercise coverage targets that are unreachable, and the tool clearly reports these thanks to its built-in, high-capacity formal engines. These same engines guide the built-in simulator to intelligently generate vectors that maximize the achievable coverage. With Magellan, coverage metrics are no longer merely measured; they are achieved.

By Alessandro Fasan, Product Marketing Manager Verification Group, Synopsys, Inc.

Fasan joined Synopsys in 1999 as a verification field engineer supporting Synopsys’ VCS® RTL verification solution and Vera® testbench automation tool. Prior to Synopsys, he worked as a logic designer and verification engineer at STMicroelectronics. He holds a "Laurea in Ingegneria Elettronica" from the University of Padova, Italy.

Go to the Synopsys, Inc. website to learn more.
