Extending the Metric-Driven Verification Methodology to TLM
Contributor: Cadence Design Systems, Inc.
March 30, 2012 -- The electronic design industry has experienced a quantum leap in productivity every time the abstraction level of design has been raised, as when RTL synthesis moved design and verification from the gate level up to the register-transfer level (RTL). Design, of course, benefited in the move to higher abstraction, but the biggest benefit occurred with verification, in terms of faster simulation and easier debug. Today we are experiencing a similar shift.
The use of functional virtual prototypes is gaining acceptance because it delivers an early model of the hardware to software developers, letting them begin their development-debug cycle much earlier. High-level synthesis (HLS) is gaining acceptance because it lets hardware designers describe their designs at a higher level of abstraction than RTL. This means fewer lines of code to write, easier and more rapid exploration of micro-architecture options, and easier re-targeting of IP for different end products. But the largest benefit is faster verification turn-around, because verification continues to be the critical path of most design projects.
By starting with higher levels of abstraction, the verification environment can be built and tested long before the detailed design architecture and micro-architecture are designed and encoded into RTL. And of course, the higher level of abstraction means fewer events to simulate and fewer lines of code to sift through to debug failures.
At the same time, metric-driven verification (MDV) has established itself as a powerful approach to verification, beginning with RTL. By planning the verification process with clearly defined metrics and tracking progress toward those goals, the MDV approach reaches verification closure more efficiently and with a higher level of confidence.
It is logical, therefore, to extend this MDV approach up in abstraction so that verification closure can begin earlier and proceed more rapidly. By starting the verification process with higher-level models, the bulk of the core functionality can be verified before RTL even exists. This way, when RTL is created, verification can focus on the design decisions that are captured with the detail that comes with RTL. We see increasing demand in the industry for this extended MDV approach, and in the last few years we have put together coherent solutions that realize this verification methodology through strategic collaboration with multiple customers.
In this article, we describe one such solution addressing hardware IP verification. The primary users of this solution are the verification teams for hardware IP design divisions, who provide a sign-off procedure for the IP. This article first observes the types of models of such IP that are often written at abstraction levels higher than RTL and are useful for verification. It then presents how the existing verification methodology at RTL can be naturally extended to start the verification work early using those models. We also describe which aspects of verification should be focused on at each abstraction level; how the testbench should be refined so that verification assets built at higher levels of abstraction can be effectively reused at lower levels; and finally, how the verification features are defined and managed across multiple levels of abstraction.
Models of IP across multiple abstraction levels
Moving design up in abstraction greatly increases overall verification throughput, but the lower levels of abstraction still need to be verified. A multi-level design and verification approach enables the capture of specific aspects of the design at each level of abstraction, adding refinement in subsequent lower levels. Consequently, it naturally defines focal aspects of verification at each abstraction level. Verification engineers focus on those aspects in developing the verification environment and verifying the target design model at each level. This separation of concerns in the verification focus across multiple abstraction levels is the fundamental vehicle to establish a scalable verification methodology.
We define the abstraction levels as follows: the functional virtual prototype level, where the algorithm is captured in untimed TLM together with the register and interface views required by software; the HLS-ready level, where hardware architecture details such as functional partitions, hardware data types, and refined communication interfaces are added; and the RTL level, where high-level synthesis introduces the micro-architecture and cycle-accurate timing.
Early testbench development and debugging
In most flows today, where an algorithm is developed in C/C++ and RTL is created by hand, there is very little commonality between the test mechanisms for each. Typically, the C/C++ model is compiled with some basic tests that are also written in C/C++. Meanwhile, robust RTL verification testbenches utilize high-level verification languages such as e or SystemVerilog. RTL verification will typically utilize the C/C++ model as the golden model against which to compare results, but there is essentially no other commonality or reuse. This creates multiple problems that, together, significantly impact verification productivity.
First, verification of the golden model is very limited. Because of this, verification engineers often encounter mismatches when verifying the RTL against the golden model, only to discover that the golden model itself is incorrect.
Second, the testbench is built only for the RTL. This causes a delay in testbench development, since RTL is either not developed or continues to change until a late stage in the design process. As a result, debugging of the testbench is often carried out while verifying the RTL, and this complicates root-cause analysis when incorrect behaviors are observed.
Furthermore, since RTL is the only stage where verification is applied, all kinds of design decisions captured in the RTL model are verified at this stage. Some of those decisions, such as the underlying algorithms or the global architecture structures, are actually made much earlier in the design process. Therefore, if problems are identified during RTL verification, it is sometimes necessary to go back and change those early-stage design decisions. Those changes could require further analysis and validation of other design decisions made in the subsequent stages. This causes a long chain of bug fixes and validation, which could have been identified sooner if verification were done at the same stage where the design decisions are made.
To address these issues, the multi-level metric-driven verification flow uses a common testbench environment for all levels of abstraction, from TLM to RTL. This enables earlier development of the testbench and verification of design decisions at the individual stages. In addition, the testbench verifies the golden reference model more thoroughly, increasing confidence at later stages when the design is verified against it.
The multi-level verification flow also enables testing of the testbench itself. Building the testbench is a significant effort in any verification project: it typically involves defining the features to be tested (captured in the verification plan), constructing the verification components, building a library of tests, adding coverage and assertions, and mapping all of this to the verification plan for tracking. The ability to develop and debug the testbench earlier in the flow means that the march toward verification closure can begin that much earlier.
The metrics used in verification — coverage, checks, and assertions — are tracked at all the stages so that when functionality is verified at the TLM stage, that information is annotated into the verification plan. As the design is refined to the signal level, and through HLS into RTL, the already-verified functionality just needs to be regressed to ensure that it still works properly. Verification will only need to focus on the newly introduced details at each stage, which greatly increases efficiency.
The testbench itself should comprise a library of verification components that can be assembled quickly into a test environment as well as reused for other designs. The libraries should be packaged with a flow that can be tailored to individual use cases. For example, when performing early unit testing, fixed test patterns that target specific functionality can be applied. These fixed tests can be written in SystemC, e, or SystemVerilog. Dynamic test pattern generation based on the progress toward the verification plan is constructed in e or SystemVerilog, and can be reused throughout the multi-level verification flow.
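The split between fixed directed tests and coverage-driven dynamic generation can be sketched as follows. This is a minimal illustration in Python rather than SystemC, e, or SystemVerilog, and the addresses, data values, and function names are all invented for the example.

```python
import random

# Fixed directed tests targeting specific corner-case functionality
# (hypothetical address/data pairs).
FIXED_TESTS = [(0x00, b"\x00"), (0xFF, b"\xFF")]

def dynamic_test(uncovered_addresses):
    """Generate a test biased toward coverage holes: pick an address the
    coverage model has not yet seen, with random data."""
    addr = random.choice(sorted(uncovered_addresses))
    data = bytes([random.randrange(256)])
    return addr, data
```

In a real flow the dynamic generator would be driven by the verification plan's coverage database; here the set of uncovered addresses stands in for that feedback.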
The Metric-Driven Verification approach
The basic concept behind the metric-driven verification (MDV) approach is that the verification process begins with an overall plan that captures all of the verification tasks, metrics, and criteria to be used. Progress is continually tracked and adjustments are made to achieve those goals more efficiently. It is an "executable plan," as it is linked to a capability that can execute tests and manage the results using various verification techniques. In short, the MDV approach converts what used to be an open-loop process into a closed-loop process, as shown in Figure 1.
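The closed loop can be sketched in a few lines of Python, with `run_tests` and `measure_coverage` standing in for the simulator and the coverage database; the goal value and iteration budget are assumptions for illustration.

```python
def mdv_loop(run_tests, measure_coverage, goal=0.95, max_iterations=10):
    """Closed-loop sketch: execute tests, measure progress against the plan's
    goal, and keep iterating until the goal is met or the budget runs out."""
    covered = measure_coverage()
    for _ in range(max_iterations):
        if covered >= goal:
            break
        run_tests()                   # in practice: launch a regression
        covered = measure_coverage()  # in practice: merge results into the plan
    return covered
```

The point is that the plan is not a static document: each iteration feeds measured results back into the decision of what to run next.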
Fortunately, rather than requiring that every verification team re-invent a verification environment, the industry has come together to define a standard verification framework called the Universal Verification Methodology (UVM). The UVM integrates naturally with the MDV approach and supports multiple verification languages. The core building blocks of the methodology are UVM Verification Components (UVCs).
UVCs are reusable components that connect to the design in a modular manner, and are hierarchically composed to create a testbench based on the structure of the design model. In our multi-level verification solution, we develop UVCs for levels of abstraction higher than RTL and use this composition capability to refine the verification environment gradually.
Step 1 in the multi-level MDV methodology: Create the verification plan
The verification process begins with the creation of the verification plan. This can be done directly within software such as Cadence® Incisive® Enterprise Manager, or in a Word document that can be read in by this management software.
The plan should start by identifying all features that should be verified. The features are categorized according to their type — interface behaviors for the inputs and outputs, the design-under-test (DUT) black-box functionality, or white-box verification of the internal behavior of the DUT. This approach is the same as the standard metric-driven verification methodology used for RTL today.
Then, each feature is mapped to the highest level of abstraction at which it can be verified. For example, a timing-dependent bus protocol handshake cannot be verified if the DUT is un-timed. We therefore examine which levels of abstraction capture all the timing information necessary to verify that feature of the bus protocol, and associate the feature with the highest such level. We make this association without moving the feature from one abstraction level to another.
Instead, we create an attribute that defines the abstraction levels at which the target feature is verified, and attach this attribute to each feature, specifying the required abstraction levels. We then use the filtering and sorting mechanisms provided in the verification management tool to classify the features by abstraction level as needed. The same test may be run at multiple abstraction levels, either to regress previously verified functionality or with different goals appropriate for each level.
Figure 2 illustrates a verification plan in which the features are sorted by the abstraction levels. Notice the attribute "RELEVANT_FOR_ABSTRACTION_LEVEL," which defines the abstraction levels where the feature will be verified.
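A sketch of how such an attribute can drive filtering and sorting, using plain Python dictionaries in place of the management tool's plan format; the feature names and level names are invented, and "FVP" abbreviates the functional virtual prototype level.

```python
# Each plan feature carries an attribute listing the abstraction levels
# at which it is verified (in the spirit of RELEVANT_FOR_ABSTRACTION_LEVEL).
PLAN = [
    {"feature": "algorithm core function", "levels": {"FVP", "HLS-ready", "RTL"}},
    {"feature": "register access",         "levels": {"FVP", "HLS-ready", "RTL"}},
    {"feature": "interface handshake",     "levels": {"HLS-ready", "RTL"}},
    {"feature": "bus protocol timing",     "levels": {"RTL"}},
]

def features_for(level):
    """Filter the plan: features whose attribute lists the given level."""
    return [f["feature"] for f in PLAN if level in f["levels"]]

def first_verified_at(feature):
    """Highest abstraction level at which the feature can be verified."""
    order = ["FVP", "HLS-ready", "RTL"]  # highest to lowest abstraction
    levels = next(f["levels"] for f in PLAN if f["feature"] == feature)
    return next(lvl for lvl in order if lvl in levels)
```

Note that the features are never moved between plan sections; the attribute alone determines which view of the plan they appear in.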
Step 2 in the multi-level MDV methodology: Functional virtual prototype verification
Because the functional virtual prototype is a high-level description of the algorithm with the register and interface views required by the software, verification at this stage naturally focuses on the functionality of the algorithm and correct register usage according to the specification. Also, because this is the first stage where the verification environment is brought up, this is typically where we define the architecture of the testbench and implement the components necessary to verify the features associated with this stage.
The advantage of developing a verification environment at this stage is not only to verify this high-level design model, but also to verify the testbench itself. The multi-level MDV methodology enables the verification components in the testbench to be reused throughout the lower levels of abstraction. In developing this testbench, the verification team can test and debug these components to ensure the correctness of sequences used to stress the design model and the mechanisms for monitoring and checking the functionality. This is done by actually simulating the testbench with the high-level design model, before RTL becomes available. This increases the verification team's confidence in the correctness of the testbench when it is used to verify the RTL at a later stage of the verification process.
The DUT at this stage typically utilizes TLM 2.0 socket interfaces for on-chip bus communication, where data types are based on the standard TLM generic payload. We developed a UVC specifically designed for the generic payload data type to quickly compose the testbench. This UVC defines the standard components (such as the sequencer, drivers, monitors, and scoreboard) with basic functionalities tailored for this data type. The user can extend these functionalities as needed, using the same extension mechanisms provided in the UVM.
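The generic-payload transaction and the scoreboard built around it can be sketched in plain Python, standing in for the SystemC/UVM components described above; the field set is a simplified subset of the TLM 2.0 generic payload, and the class names are illustrative.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class GenericPayload:
    """Simplified subset of the TLM 2.0 generic payload fields."""
    address: int
    command: str          # "READ" or "WRITE"
    data: bytes = b""

class Scoreboard:
    """Compares transactions observed at the DUT against expectations
    produced by a golden reference model, in arrival order."""
    def __init__(self):
        self.expected = deque()
        self.mismatches = 0

    def expect(self, txn):    # fed by the golden reference model
        self.expected.append(txn)

    def observe(self, txn):   # fed by the monitor watching the DUT
        ref = self.expected.popleft()
        if ref != txn:
            self.mismatches += 1
```

In the UVM environment the sequencer and driver would generate and deliver these payloads; the extension mechanisms mentioned above would add user-defined fields and comparison policies.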
The registers of the DUT at this level can be stressed and verified in the same way as the RTL DUT. In fact, the standard register verification package (such as vr_ad in the e language) can be used to define, stress, and monitor the registers in the testbench.
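In the same spirit, register stress-and-check at this level can be sketched with a mirrored register model, as register packages such as vr_ad do; the register name, reset value, and mask below are invented for illustration, and this is plain Python rather than e.

```python
class Register:
    """A register with a testbench-side mirror of its predicted value."""
    def __init__(self, name, reset, mask=0xFFFFFFFF):
        self.name = name
        self.mask = mask
        self.mirror = reset & mask   # predicted value, starts at reset

    def predict_write(self, value):
        """Update the mirror when the testbench writes the register."""
        self.mirror = value & self.mask

    def check(self, dut_value):
        """Compare a value read back from the DUT against the mirror."""
        return dut_value == self.mirror
```

The same register definitions and check sequences can then be reused unchanged against the RTL DUT at the later stage.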
Verification at this stage should be robust and thorough, focusing on verifying the core functionality of the DUT with the registers. The goal should be to identify as many functional bugs as possible, along with validating the algorithm. Because this is such a high level of abstraction, simulation runs very quickly and it is very easy to identify the source of bugs.
Step 3 in the multi-level MDV methodology: HLS-ready verification
As the previous stage's verification process proceeds, the design team refines the functional virtual prototype toward a hardware architecture. The details newly added at this stage are the functional partitions for parallelism, the hardware data types, static arrays for memory, and the communication interfaces.
As the communication interfaces of the design model get refined, the testbench also needs to be refined, minimizing changes to the original testbench used at the previous stage. This can be accomplished by layering, which is a technique used to compose multiple UVCs hierarchically. In the RTL verification flow, layering is used when a testbench is integrated into another testbench for a higher level of design hierarchy. The same mechanism is used here, but this time across multiple levels of abstraction.
Figure 3 illustrates the layering approach. Here, the module-level component in the testbench was used at the previous stage, where the communication interfaces are modeled with TLM 2.0 and the TLM generic payload data type.
This component is used without changes at this stage. However, a new UVC is layered underneath to accommodate the gap between the original UVC and the newly refined interfaces of the DUT (see "Agents for refined interfaces" in Figure 3). This UVC receives sequences from the original UVC as TLM 2.0 transactions, and then decomposes the transactions and the data type according to the refined interfaces given for the DUT. Similarly, the refined transactions produced from the DUT are aggregated to TLM 2.0 transactions, which are then passed to the upper UVC for monitoring and scoreboarding. As depicted in Figure 3, the scoreboard may use a golden reference model, which can be the algorithm model used as the DUT in the previous stage.
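The decomposition and aggregation performed by the layered UVC can be sketched as a pair of functions; a word width of four bytes and byte addressing are assumptions made for this example.

```python
WORD = 4  # bytes per beat on the refined interface (assumed)

def decompose(address, payload):
    """Split one TLM-2.0-style burst into (address, word) beats for the
    refined DUT interface."""
    return [(address + i, payload[i:i + WORD])
            for i in range(0, len(payload), WORD)]

def aggregate(beats):
    """Rebuild a single (address, payload) transaction from ordered beats,
    for monitoring and scoreboarding in the upper UVC."""
    address = beats[0][0]
    return address, b"".join(word for _, word in beats)
```

Because aggregation is the inverse of decomposition, the upper UVC's monitors and scoreboard continue to see the same transaction stream they saw at the previous stage, which is what allows them to be reused without change.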
The verification focus at this stage is to ensure that the design functions properly with the newly introduced hardware architecture and protocol awareness. These additional verification features often require new test scenarios, new checkers, and coverage models. The verification team adds those capabilities in the testbench, validates the correctness of the enhanced testbench, and then verifies the design model.
Note that not all the interfaces of the DUT are modeled with TLM. For example, the control and reset signals of the design might have been abstracted away at the functional virtual prototype stage, but they are defined at this stage to reflect the hardware implementation. These will be modeled with the signal data type of SystemC rather than as TLM interfaces. To connect the testbench to the DUT through those refined interfaces, we use the same layering approach, where the module-level verification component is still reused while the data and controls are refined within the layered UVCs based on the corresponding interfaces. To verify these new interfaces, we introduce new sequences and checkers in the UVCs layered for the specific interfaces. If the verification features are not localized to particular interfaces, then new sequences and monitoring mechanisms might be added at the module component level. Timing-independent reset verification is such an example, and is often conducted at this stage.
Step 4 in the multi-level MDV methodology: RTL verification
High-level synthesis (HLS) creates the RTL implementation of the design—the micro-architecture, data operation resources, control logic, and storage elements. This changes the timing properties of the design as compared to the previous stage. Since the core functionality should already have been verified at the HLS-ready stage, that functionality only needs to be regressed at RTL to ensure that no functionality changed as a result of the new timing properties. Verification at this stage thus can focus on the micro-architecture, timing-dependent features, and signal-level protocol at the I/O interfaces.
From RTL through the rest of implementation, logic equivalence checking can verify that the function of the logic between registers does not change as it moves to gates and is optimized during physical implementation. Unfortunately, there is no similarly exhaustive technology to verify that the functionality of the RTL matches the functionality of the HLS-ready design. This is because HLS inserts the very state elements that static formal checking relies upon to break the problem into solvable pieces.
Sequential logic equivalence checking technology is available and can supplement the RTL verification effort, but it cannot perform the entire task. This type of checking can be run in two modes: a partial proof that can find bugs but cannot exhaustively prove equivalence, or a full proof that can exhaustively prove equivalence for small blocks if hints are provided in the form of a mapping between the two designs. While this technology is useful as part of the RTL verification approach, it does not replace the need to perform RTL simulation. Because RTL verification is still required in the multi-level flow, the metric-driven approach is crucial. Tracking which features have already been verified, and running only a subset of tests to regress those features at RTL, minimizes the amount of verification required at RTL. Thus, the bulk of verification is performed at the higher levels of abstraction, where throughput is much higher and debug is much easier, reducing the overall turnaround of the verification process.

Figure 4 shows a verification plan for a multi-level flow, with progress toward completion annotated for each feature at each level. It also highlights detailed functional coverage metrics for a specific feature in white-box testing at the functional virtual prototype level.
One might assume that once the RTL DUT becomes available, the testbench and the design models of the higher levels of abstraction would no longer be necessary. This is often not the case. In fact, the verification team still effectively uses those higher-level models and the testbench while verifying the RTL. A typical case would be when a new configuration of the testbench sequences is explored to stress particular corner cases that become necessary to test in the design model.
Even if the target DUT is now at RTL, experiments can often be done with the higher-level models to explore test scenarios and the specifics of constraints to be imposed on the random sequence generators. Sometimes it is even possible to integrate coverage models and write checkers using the higher-level models, which are then reused at RTL. Since simulation runs much faster than it does at RTL, more exploration and tuning of the testbench can be done effectively at those higher levels of abstraction.
As the verification challenge has grown exponentially, it has become the bottleneck for bringing SOCs to market. The problem has become so acute that the progress of innovation has slowed in favor of reusing legacy IP blocks designed for previous generations of hardware and software.
Just as the move in abstraction from the gate level to the register-transfer level enabled a quantum leap in verification productivity, the time for the next quantum leap is here. SystemC transaction-level modeling increases design productivity — fewer lines of code are required to describe hardware, designers can rapidly explore different micro-architecture tradeoffs, and re-targeting to different application requirements and different process nodes is automatic. But most importantly, this higher-level abstraction of the design shortens overall verification turn-around. Fewer lines of code mean fewer bugs, simulating at a higher level of abstraction speeds run-times, and searching for the cause of bugs is much easier at higher levels of abstraction.
To reduce overall verification turnaround, this higher-abstraction verification must be part of a focused metric-driven approach to what is a multi-level challenge. This starts with a plan that details what needs to be verified, and identifies at what level each feature is first verified so that a design decision is verified as soon as it is made. It then reuses the test environment throughout the flow, and this environment reports back the results to the plan. Additionally, this test environment can be reused when the IP plugs into the SOC context, where verification will also focus on the software view, mix abstraction levels of IP models, and mix IP/subsystem execution platforms.
By utilizing this multi-abstraction-level approach, most of the verification effort can be focused early in the process where it is more efficient. Only newly introduced details are verified at the more detailed levels, with everything else being simply regressed. By applying this smart focus to verification, the process of closing on your verification goals becomes much more predictable and contained.
By Yosinori Watanabe and Jack Erickson
Yosinori Watanabe is a Senior Architect and Jack Erickson is Product Marketing Director, Cadence Design Systems, Inc.
Reprinted from SOCcentral.com.