April 22, 2010 -- Ask any SOC design engineer who wrestles with hardware/software co-verification and debug issues to name the advantages of emulation. Almost certainly, speed will top the list. But that answer is at best incomplete – mostly because it gives short shrift to the potential of software logic simulation.
At first blush it seems impossible to quibble with emulation as the superior choice for all those who care about speed and efficiency. That's probably nearly everyone developing new SOCs, since design teams are schedule-driven as never before. Most involved in the semiconductor industry now accept as inviolable the need to shorten time-to-market and time-to-volume. And doing the same things faster seems like a sensible choice.
Granted, by the most obvious measure, emulation is orders of magnitude faster than the alternative of software logic simulation. Emulators can deliver hundreds of thousands of cycles per second – performance that matters when gate counts number in the tens of millions or more. Compare that to simulators, which can be as slow as just a few cycles per second.
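To make that gap concrete, here is a back-of-envelope sketch in Python. The cycle counts and rates below are illustrative assumptions chosen to match the rough figures above – "hundreds of thousands" of cycles per second for an emulator, a handful for a large simulated design – not benchmarks of any particular tool.

```python
# Back-of-envelope comparison: wall-clock time to run a fixed number of
# design clock cycles on an emulator vs. a software logic simulator.
# All numbers are illustrative assumptions, not measurements.

def wall_clock_hours(design_cycles, cycles_per_second):
    """Hours of wall-clock time to execute `design_cycles` at the given rate."""
    return design_cycles / cycles_per_second / 3600

BOOT_CYCLES = 50_000_000   # assumed cycles for a modest software boot sequence
EMULATOR_CPS = 500_000     # "hundreds of thousands of cycles per second"
SIMULATOR_CPS = 100        # an optimistic rate for a very large design

print(f"Emulator:  {wall_clock_hours(BOOT_CYCLES, EMULATOR_CPS):.2f} hours")
print(f"Simulator: {wall_clock_hours(BOOT_CYCLES, SIMULATOR_CPS):.0f} hours")
```

With these assumed numbers, the emulator finishes the run in under two minutes, while the simulator needs nearly six days – which is exactly why raw speed dominates the conversation.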
Yet this measure is not altogether accurate, mostly because it accounts for neither the way emulators are actually used by leading-edge SOC design teams, nor the pressures such teams are under to write and verify increasing amounts of software.
Consider, for example, a semiconductor company that uses emulators for all of its critical software testing. This company is big and works at technology's bleeding edge, so there are always lots of engineers lining up to debug lots of software. Throughput invariably starts to bog down, a major headache for the MBA types who hate to see expensive emulators sitting underused for any length of time.
One software engineer might hold things up for hours as he tries to pinpoint the problems in his code. While he has the CPU stopped on a breakpoint, the air conditioner and fans in the server room whir on while everyone else waiting to use the emulator sits twiddling their thumbs. A second developer, huddling with a hardware engineer over another list of bugs, might snarl the emulation queue for half a day or more before finally announcing that the problem is in the hardware design and not his or her code. That invariably means another hardware design iteration, including the necessary synthesis and place-and-route steps, before the new design can be re-tested.
At this point you might be thinking: "Hey, I thought you said this was a semiconductor company. Who cares about software, especially so early in the hardware design schedule?" That's a fair question – if the year is 1995 or even 2000.
But today, the increasing complexity of SOCs, coupled with the need to handle a dizzying array of OEM requirements and end-user demands, means that nearly every SOC manufacturer must ship production software that goes far beyond the bare-bones firmware and test code that was the norm a decade ago. When they don't, or when attention to critical hardware/software interactions wanes, the worst kind of real-world problems ensue.
I know of one SOC manufacturer that was recently horrified to find out that new cell phones based on its chips were spontaneously rebooting daily. Even for the glibbest marketing manager, it's tough to explain this away as a feature. The company scrambled to respond, applying a range of dynamic verification techniques. Eventually, formal verification identified a problem with a control block state machine that would occur just five clocks after reset. This problem almost certainly would have been identified if more attention had been paid to early-stage verification of the blocks underlying the SOC design and the software that interacts with those blocks in the SOC.
The company fixed the issue in its next manufacturing run. But the best it could do for customers who already had the phones with the flawed chips was a software patch that made spontaneous reboots a weekly rather than daily occurrence. This was enough to avoid a recall, but probably not enough to prevent strains in the relationship with the OEM phone manufacturer. These hard feelings may ultimately prove more expensive than the fire drill to find the bug and issue the patch.
Now, back to the company with those emulators. The company posed the question, "Wouldn't it be great if there were an emulator on every engineer's desk? Can't you make one the size of those pizza box computers?"
As it turns out, every engineer already has a computer on their desk. Wouldn't it be better to take at least some of the debug activity off the expensive emulators and move it to simulation, which can run on any decent desktop or laptop computer? Almost certainly, this approach would shrink turn-around time for addressing critical early-stage defects.
Simulation hardly seems slow compared to the elaborate synthesis and place-and-route processes associated with any tweak to the hardware design, especially when it comes to thorny FPGA prototypes. At an early stage in the design process, the design may not yet be synthesizable. It can, however, be simulated, and some critical functionality involving the interaction of software and hardware can be verified.
Another benefit of simulation is that it generally provides better visibility and control of hardware/software interactions than does emulation. With a common environment that unifies the HW and SW debug paradigms, pairing waveforms with processor register and code views, the software and hardware engineers can work together to solve problems. This might well stanch some of the finger-pointing and accusations – "You just ran the wrong test case!" – that invariably arise between those who obsess over lines of code and those who worry more about circuits and state machines.
Am I right? Does simulation deserve a second look as a meaningful complement to emulation, especially for early-stage hardware/software co-verification and debug work on SOC designs?
Perhaps it's an open question whether emulation always wins the race with simulation. Yet it's a near certainty in our industry that opinions on methodology are speedily extolled, excoriated, or both. So let me know what you think, either in the comments below or by email. You might change my mind, and even help shape some of the new technology Mentor Graphics is working on in this particular verification niche.
By Marc Bryan.
Marc Bryan is a 24-year veteran of the semiconductor industry who is currently product marketing manager for Mentor Graphics' Codelink products. Bryan came to Mentor after five and a half years with ARM's tool division, where he managed system-level model and debug products for single and multi-core processor-based designs. He holds a B.S. in computer science from the University of Pennsylvania. You can reach Marc at firstname.lastname@example.org.
Go to the Mentor Graphics Corp. website to learn more.