Architectural-Design Considerations for Implementing Hardware Acceleration
Publication: EDN Magazine
September 29, 2005 -- Across a range of embedded-system applications, the combination of data-processing and system-throughput requirements is increasing to the point at which implementing algorithms purely in software on a single high-powered CPU exposes two challenges. First, system power and cost are forced upward. Beyond the obvious battery-life issues for mobile platforms, rising power dissipation increases the need for heat sinks and supplemental cooling. Second is the difficulty of adding value-added functions to a system when handling the baseline system functions fully occupies the CPU's processing capacity — especially when a designer cannot implement the new functions without adding components.
What options are available? For the purposes of this article, the choices break down into three areas. Customizing the CPU's instruction set for the application can markedly improve algorithm-processing efficiency. The development tools for harnessing such configurable cores have become significantly more usable than they were a few years ago. However, this strategy can bind a designer to a specific implementation, which may cause legacy-software issues over time.
By Ian Ferguson. (Ferguson manages QuickLogic Corp.'s Embedded Standard Products division, whose devices include QuickMIPS.)
Reprinted from SOCcentral.com, your first stop for ASIC, FPGA, EDA, and IP news and design information.