September 25, 2006 -- “Wait a minute; don’t forget to add the memory.” How often did we hear that phrase in the design labs of the 1980s and early 1990s? Back then, the DRAM industry was one Henry Ford himself would have been proud of. Just like the singularity of his Model T and the near universality of black in early car designs, an OEM could have any type of DRAM it wanted, as long as it was Fast Page Mode. Ironically, the tables have now completely turned, and the only thing that hasn’t changed since then is that DRAM epoxy is still black. Everywhere else, memory homogenization has given way to a challenging new design environment.
It’s surprising to even hear the word “commodity” still being used in some DRAM discussions. Memory is now anything but a commodity.
Impact of the DRAM decision
The DRAM design decision has rapidly moved up the “what matters” chain of overall system and platform layout priorities. Developers now face their toughest challenge: creating “hot” computer or mobile products in a way that sets them apart. A wide range of DRAM characteristics plays a role in this differentiation: smaller, faster, denser, cooler, easier to scale.
Despite tremendous technological advance, designers simply can’t have it all. Memory requires hard choices. Faster means hotter. Denser means bigger.
While memory has become the critical linchpin of many designs, it also holds the potential to become an Achilles’ heel when designers shortchange it in a misguided effort to enhance the selling proposition. Samsung has found that DRAM not only opens the window to more useful designs in consumer electronics, but also increases both the selling and value propositions of upgradeable computer systems, if designers plan accordingly.
For servers, one design challenge has been resolved. With the advent of 64-bit computing, speed-density tradeoffs in selecting memory have been all but eliminated. Server operating systems have virtually removed the ceiling on DRAM density.
But one caution underscores the importance of the DRAM server design decision regardless of how creative the board layout: the need to overcome loading issues when increasing DRAM density. Today, the move to fewer ranks per DRAM module is what enables increases in overall DRAM density. Server layouts typically offer four DIMM slots per channel, which can be optimized by populating each slot with dual-rank DRAM modules. Populating with quad-rank DRAM modules, unfortunately, limits overall system DRAM capacity. For this reason alone, the highest-performing server architecture can be crippled by the use of the wrong DRAM module.
To improve a designer’s choices, Samsung has developed 1Gbit-based DRAM modules. With these, single-rank 2-GByte modules can be specified, but more significantly, a dual-rank 4-GByte module enables server users to make use of up to 32GBytes of DRAM per channel. The primary choices for server designers today can be found in Figure 1.
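The module capacities above follow directly from device density, device width, and rank count. The sketch below illustrates that arithmetic for a standard 64-bit data bus (ECC ignored); the function name and parameters are illustrative, not part of any specification.

```python
# Sketch of the module-capacity arithmetic behind the figures above.
# Assumes a standard 64-bit (non-ECC) data bus; names are illustrative.

def dimm_capacity_gbytes(device_gbit, device_width, ranks, bus_width=64):
    """Capacity = ranks x (devices per rank) x (density per device)."""
    devices_per_rank = bus_width // device_width  # e.g. 64 / 4 = 16 chips
    total_gbits = ranks * devices_per_rank * device_gbit
    return total_gbits / 8  # gigabits -> gigabytes

# A dual-rank module built from 1-Gbit x4 devices:
print(dimm_capacity_gbytes(device_gbit=1, device_width=4, ranks=2))  # 4.0 GB

# The single-rank variant of the same devices:
print(dimm_capacity_gbytes(device_gbit=1, device_width=4, ranks=1))  # 2.0 GB
```

The same formula shows why rank count is the lever: moving from single to dual rank doubles capacity without requiring denser devices.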
Figure 1. Proliferation of DRAM component alternatives that have surfaced during the past decade.
To illustrate the challenges designers face in specifying DRAM, we can look to how DDR2 technology was adopted in the notebook market. DDR2 consumes 25% less power than its DDR predecessor, operating at 1.8V in a SODIMM form factor. Moving to 512 Mbit-based modules, notebook designers can build a single-sided 256-MByte SODIMM to save space, improve thermal performance and increase airflow. 1-GByte modules will likely become the sweet spot for notebook high performance, with a 2-GByte module representing a strong alternative. A x16 configuration is today’s de facto notebook memory standard, since it typically conserves power best when addressing the memory chip.
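The single-sided 256-MByte figure follows from the x16 organization: a 64-bit bus needs only four x16 devices per rank, few enough to fit on one side of the module. A quick sketch of that arithmetic, under the assumptions stated in the comments:

```python
# Why a 512-Mbit, x16-based SODIMM lands at 256 MBytes single-sided.
# Assumes a standard 64-bit (non-ECC) notebook memory bus; illustrative only.

bus_width = 64      # bits on the memory channel
device_width = 16   # x16 device organization
device_mbit = 512   # 512-Mbit DDR2 devices

devices = bus_width // device_width            # only 4 chips per rank
capacity_mbytes = devices * device_mbit // 8   # 4 x 512 Mbit = 256 MBytes
print(devices, capacity_mbytes)                # 4 256
```

Fewer devices per rank is also part of the power story: each memory access activates fewer chips than an eight-device x8 rank would require.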
Beyond notebooks, an amazing proliferation of consumer electronic (CE) devices has complicated the competitive landscape with convergence forcing a longer-term view of DRAM alternatives in maximizing system flexibility.
With over half of today’s memory market cornered by consumer electronics, DRAM versatility matters even more than in past years. Most data-centric CE devices are just as dependent on DRAM as they are on Flash memory.
Today, cell phone memory configurations are migrating from a NOR/SRAM combination to NAND/DRAM, with potential mid-range specifications carrying 512MBytes of each. Moreover, the expansive breadth of memory products has fueled innovation in the MCP (multi-chip package) memory market. Stacking up to eight die per package, DRAM densities of 2Gbits are now possible.
In the gaming segment, a typical console is configured with 512MBytes of DRAM – the same amount offered by many desktop and laptop PCs. Moreover, per-pin data rates of up to 1800Mbps are now available for premium GDDR3 memory. Standard x8 configurations are too limiting to drive much of today’s graphics demands. And, while x16 might be considered adequate in some applications, x32 GDDR3 at 1800Mbps will thoroughly enhance a graphics environment.
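Why the device width matters becomes clear from the peak-bandwidth arithmetic: per-pin data rate times device width, divided by eight bits per byte. A rough sketch, assuming the figures above; real sustained bandwidth is lower than this theoretical peak.

```python
# Peak per-device bandwidth: (data rate per pin) x (device width) / 8.
# A rough sketch of theoretical peak; function name is illustrative.

def gddr3_peak_gbytes_per_s(mbps_per_pin, width_bits):
    return mbps_per_pin * width_bits / 8 / 1000  # Mbit/s -> GByte/s

# x32 GDDR3 at 1800 Mbps per pin:
print(gddr3_peak_gbytes_per_s(1800, 32))  # 7.2 GB/s per device

# The same data rate in a x8 device delivers a quarter of that:
print(gddr3_peak_gbytes_per_s(1800, 8))   # 1.8 GB/s per device
```

At a fixed per-pin rate, a x32 device moves four times the data of a x8 part, which is why wide organizations dominate graphics memory.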
The application hardware era
Wherever designers look, user-specific system requirements are not being fully met, but with greater care in specifying DRAM, the situation clearly improves. While DRAM cannot offer the level of customization of an ASIC, it does offer a feature set dynamic enough to enable more efficient system- and platform-level customization. In achieving this goal, evidence already points toward further segmentation of the DRAM product mix. The DRAM design block is spawning an era of “Application Hardware” that calls on better-informed designers to improve the user experience, well before software-enabling factors come into play.
In today’s market, just two or three memory design variables can make a multi-million-dollar difference in the speed of product entry, the depth of market penetration and the extent of user satisfaction.
By Tom Trill, Director, DRAM Marketing, Samsung Semiconductor, Inc.
This article originally appeared in Computer Technology Review.