In 1965, Gordon Moore prophesied that integrated circuit density would double roughly every one to two years. The universal acceptance and relentless tracking of this trend have set a grueling pace for all chip developers. The trend makes transistors ever cheaper and faster (good) but also invites system buyers to expect constant improvements in functionality, battery life, throughput, and cost (not so good). The moment a new function is technically feasible, the race is on to deliver it. Today, it is perfectly feasible to build SOC devices with more than 100 million transistors, and within a couple of years we'll see billion-transistor chips built for complex applications, combining processors, memory, logic, and interfaces.
High integration creates a terrific opportunity. The remarkable characteristics of CMOS silicon scaling allow the cost, size, performance, and power for a given function all to improve simultaneously. Scaling thus allows continuous improvement in end-product benefits: longer battery life, smaller size, more functionality, and higher user productivity. It has been a primary driver for the parallel revolutions in digital consumer electronics, personal computing, and the Internet. Moreover, most observers expect the scaling trend to continue for at least another 15 years.
The growth in available transistors creates a fundamental role for concurrency in SOC designs. Different tasks, such as audio and video processing and network-protocol stack management, can operate largely independently of one another. Complex tasks with inherent internal execution parallelism can be decomposed into a tightly coupled collection of sub-tasks operating in parallel to perform the same work as the original non-parallel task implementation. This kind of concurrency offers the potential for significant improvements in application latency, data bandwidth, and energy efficiency when compared to serial execution of the same collection of tasks with a single computational resource.
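The first kind of concurrency described above can be sketched in software. The snippet below is a minimal, illustrative model only: the three task functions are hypothetical stand-ins for independent SOC workloads (audio, video, and protocol processing), not real DSP or networking code, and on an actual SOC each would map to its own processing engine rather than a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for independent SOC workloads.
def process_audio(samples):
    return [s * 2 for s in samples]      # placeholder for a DSP step

def process_video(frames):
    return [f + 1 for f in frames]       # placeholder for a pixel step

def handle_packets(packets):
    return [p[::-1] for p in packets]    # placeholder for a protocol step

def run_concurrently():
    # Because the tasks share no data, they can execute in parallel;
    # total latency is set by the slowest task, not the sum of all three.
    with ThreadPoolExecutor(max_workers=3) as pool:
        audio = pool.submit(process_audio, [1, 2, 3])
        video = pool.submit(process_video, [10, 20])
        net = pool.submit(handle_packets, ["ab", "cd"])
        return audio.result(), video.result(), net.result()
```

The key property is the absence of shared state between the three tasks, which is what lets them proceed without synchronization until their results are collected.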
If high silicon integration is a terrific opportunity, then the design task must be recognized as correspondingly terrifying. Three forces work together to make chip design tougher and tougher. First, the astonishing success of semiconductor manufacturers in tracking Moore's Law gives designers twice as many gates to play with every two years. Second, the continuous improvement in process geometry and circuit characteristics motivates chip builders to design with new IC fabrication technologies as they become available. Third, and perhaps most important, the end markets for electronic products—consumer, computing, and communications systems—are in constant churn, demanding a constant stream of new functions and improved performance to justify new purchases.
As a result, the design "hill" keeps getting steeper. Certainly, improved chip-design tools help—faster RTL simulation, higher-capacity logic synthesis, and better block placement and routing all mitigate some of the difficulties. Similarly, the movement toward systematic logic-design reuse can reduce the amount of new design that must be done for each chip.
But all these improvements fail to close the design gap.
Even as designers wrestle with the growing resource demands of advanced chip design, they face two additional worries:
- How do design teams ensure that the chip specification really satisfies customer needs?
- How do design teams ensure that the chip really meets those specifications?
Further, a good design team will also anticipate future needs of current customers and potential future customers—it has a built-in road map.
Access the entire document on the Tensilica, Inc. website.