Bridging SOC Architectures for Faster Timing Closure
But given the extreme complexity now found on a single chip, and the directions that both EDA and IP providers are taking to support each of the methodologies highlighted briefly above, is it time to revisit the basic architectural philosophies used for SOCs? And how would doing so help the timing-closure problem?
Challenging the basics of SOC architecture, whether you subscribe to the tightly coupled computer-bus architecture or to the loosely coupled distributed (multi-core) architectures now rising in popularity, is worth doing at this time because of three industry trends:
While these are business trends rather than technical ones, they do affect chip design and verification. The limited capital and labor available to a semiconductor company in the aftermath of the recession make it increasingly difficult for any given engineering team to absorb the newer functional complexities and still complete a project within a timeframe, and at a development cost, that can capitalize on a market opportunity with an acceptable return on investment. Simple economics says this is not a winning formula unless something changes.
So what can change? History provides a clue. When the microprocessor was invented, it first rose to popularity in embedded system-control applications, long before the PC. As multiprocessor systems emerged, the popular architectures of the day divided the system into two distinct pieces: a control plane and a data plane. Communications companies largely followed this philosophy through several generations of equipment and infrastructure development as well. Why? Because the approach sufficiently modularized otherwise very complex relationships so that complex systems could be designed and tested within affordable limits. So why doesn't the SOC community now follow this path? Let's take a closer look:
Figure 1 shows a simplified example of a two-way Internet video-conferencing device. Figure 1a represents a conventional multi-core architecture while Figure 1b re-partitions the design in terms of control and data planes.
With the approach in Figure 1b, architects can separate the development of a specific function's performance from how the overall system manages each function and provides access to system resources. This gives chip designers and software developers a pre-verified system-management framework into which a variety of different functions can be inserted without completely re-engineering system interoperability for each specific design.
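The idea of a pre-verified framework into which functions plug can be sketched in software terms. The following C++ sketch is purely illustrative (the class and function names are my own assumptions, not from the article): each data-plane function exposes one small interface, and the system manager brings functions up and down without knowing their internals, so adding a new function is one registration rather than a redesign of every block.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical interface: any data-plane function the system manager
// can control. Names here are illustrative assumptions.
class DataPlaneFunction {
public:
    virtual ~DataPlaneFunction() = default;
    virtual std::string name() const = 0;
    virtual void start() = 0;   // begin processing on the data plane
    virtual void stop() = 0;    // quiesce before reconfiguration
    bool running = false;
};

// One concrete function; a comms engine or audio codec would plug in
// the same way, with no change to the manager.
class VideoPostProcessor : public DataPlaneFunction {
public:
    std::string name() const override { return "video-post"; }
    void start() override { running = true; }
    void stop() override  { running = false; }
};

// The system-management (control-plane) block: owns the functions and
// sequences bring-up/bring-down top-down.
class SystemManager {
    std::vector<std::unique_ptr<DataPlaneFunction>> fns_;
public:
    void add(std::unique_ptr<DataPlaneFunction> f) { fns_.push_back(std::move(f)); }
    void bringUp()   { for (auto& f : fns_) f->start(); }
    void bringDown() { for (auto& f : fns_) f->stop(); }
    const DataPlaneFunction* at(size_t i) const { return fns_.at(i).get(); }
};
```

In hardware the "interface" would be a register map or handshake protocol rather than a virtual class, but the decoupling argument is the same: the manager is verified once, independently of any particular function.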
The data plane in Figure 1b would likely be a high-speed video bus optimized for long data bursts. Since the host processor performs the communications functions, there could also be a second, latency-optimized data plane. But because system coordination and control are abstracted into a separate control plane (managed by a separate block), each data plane can operate within its own constraints. Top-down system management keeps them coordinated.
One area that may seem to grow more complex under this architecture is arbitration. But, in effect, arbitration complexity is reduced because the control plane need act only as a switch between the data planes. Remember how straightforward communications-switching fabrics were in the early days of networking? A connection between the control-plane block and the arbitration within each data plane provides ample coordination.
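The switch-like arbitration described above can be made concrete with a minimal C++ sketch. This is an assumption-laden illustration, not a description of any shipping IP: the control plane grants a shared resource to one data plane at a time, with no negotiation between the planes themselves, which is exactly why its complexity stays low.

```cpp
#include <cassert>
#include <optional>
#include <string>

// Hypothetical control-plane switch: grants one shared resource (say,
// a memory port) to at most one data plane at a time. Plane names are
// illustrative.
class ControlPlaneSwitch {
    std::optional<std::string> owner_;  // which data plane holds the grant
public:
    // Returns true if the grant is given; a busy switch simply denies,
    // so no plane-to-plane negotiation is ever needed.
    bool request(const std::string& plane) {
        if (owner_.has_value()) return false;
        owner_ = plane;
        return true;
    }
    void release(const std::string& plane) {
        if (owner_ == plane) owner_.reset();
    }
    bool busy() const { return owner_.has_value(); }
};
```

Because each data plane only ever talks to this one switch, adding a third data plane does not change the arbitration logic of the other two; the quadratic interconnect of peer-to-peer arbitration collapses to a star.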
The engineering team focused on designing a particular function (video post processing, for example) now has a system-management framework in which to operate before it begins its design. This is in sharp contrast to Figure 1a, where the typical multi-core design treats system management as part of each specific function's development. Nothing, of course, comes free, and the system-management block required for Figure 1b is new.
But back to timing closure. The ripple-savings effect of Figure 1b is most evident at or near top-level integration. With Figure 1a, the team begins integration with all the functions exercised together, in many cases for the first time, and this is where architecture, design, and software artifacts emerge. Timing-closure challenges escalate rapidly because any change ripples through many blocks and disrupts the overall timing of the chip (not to mention the myriad software-synchronization problems that surface as well). With the control-plane framework of Figure 1b, integration is far less of a challenge because the system-management task can be modeled and debugged in parallel with, if not before, data-plane design and verification, reducing the chance of artifacts as the specific functions are integrated.
The approach in Figure 1b also offers relief for EDA and IP offerings. SystemC modeling, for example, can carry most of the baseline control-plane development, while RTL and physical-design tool chains handle the specific function development. The two worlds can then be integrated much more straightforwardly through mixed-mode simulation. As a result, existing tools become more effective.
This article has described, albeit briefly, an out-of-the-box approach to significantly reducing timing-closure challenges through revised architectural partitioning. Adding a level of system-management de-coupling, such as ChipStart's new SSM IP, creates opportunities to bridge SOC architectures toward control-plane-like schemes. That bridging realigns the economics of developing complex SOCs given today's reduced capital and labor mixes. And in the absence of major new innovation in flows and IP, which we should anticipate going forward, this approach also preserves the utility value of existing tool chains. The system-manager block can also virtualize control-plane management and synchronize software operation, while interconnect technologies continue to virtualize data-plane management.
By Phil Casini.
Phil Casini is a managing partner for Advance Tech Marketing (ATM), a management, sales enablement, and marketing consulting and training firm. Prior to founding ATM, Phil spent 26 years in high technology companies including Intel and Dallas Semiconductor, with 14 years as an executive at Cirrus Logic, Cradle Technologies and Sonics Inc.
Reprinted from SOCcentral.com, your first stop for ASIC, FPGA, EDA, and IP news and design information.