The heart of the device
What impact will advances in processing capability have on communications devices?
As one oft-quoted version of Moore's Law puts it, processing capability doubles every 18 months (see article on p36). This article looks at some of the techniques supporting this exponential expansion of computer brain-power, then explores the impact that advances in processors will have on future devices - after all, processor technology is the heart of the device.
It's hard to imagine how the first moon landings were achieved given the limited computing power available at the time. The personal computer of today is massively more capable than those of even just a few years ago, though many users barely notice. The constant struggle for chip manufacturers is to clock up faster processing rates while reducing, or at least stabilising, power consumption and heat generation. This is especially important against the backdrop of miniaturisation and increasingly portable consumer electronics equipment. Current wireless technologies employed in these products often exacerbate the power consumption problem.
One technique employed over recent years has been to farm out processor-intensive functions such as graphics and display driving to specialist processors optimised for the purpose, leaving the general processor available for other tasks. High-end GPUs (graphics processing units) are now themselves reaching performance figures of many billions of pixels processed per second. These specialist units are also borrowing some of the same techniques as their central processing unit (CPU) cousins.
Another technique has been to shorten the paths between the processor and the immediate storage it uses, introducing various levels of on-board cache memory. This addresses throughput bottlenecks in the system: the time taken to fetch instructions and write back results is reduced through such caching.
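The same principle, keeping frequently used results close at hand so the slow path is taken as rarely as possible, can be sketched in software. The Python snippet below is only an analogy for hardware caching, not a model of it: a memoising cache sits in front of a deliberately "slow" lookup, and repeated accesses are served from the cache instead.

```python
from functools import lru_cache

slow_fetches = 0  # counts how often we fall through to the "slow" storage


@lru_cache(maxsize=128)
def fetch(addr):
    """Stand-in for a slow memory access; the cache short-circuits repeats."""
    global slow_fetches
    slow_fetches += 1
    return addr * 2  # dummy "data" held at this address


# Five accesses, but only two distinct addresses: the cache absorbs the rest.
for addr in [1, 2, 1, 2, 1]:
    fetch(addr)

print(slow_fetches)  # 2 slow fetches for 5 accesses
```

As with a CPU cache, the benefit depends entirely on access patterns: a workload that never revisits an address gains nothing.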
Some leading chips already incorporate 12 or 16MB of cache memory. It's not so long ago that PCs only had that amount of RAM inside them in total! In terms of getting data to and from the latest processors, so-called Front Side Bus (FSB) speeds become as critical as the processor speeds themselves. However, as the FSB itself becomes a bottleneck, it is being replaced by new bus architectures such as HyperTransport.
Perhaps the most striking advances in processor manufacturing and design in recent years have been in feature sizes. The physical scale of the manufacturing process has shrunk dramatically from 90nm (nanometres) to 65nm and more recently to 45nm. To put it in perspective, hundreds of transistors built on the 45nm process would fit on the surface of a human blood cell. Smaller feature sizes mean higher performance, reduced power consumption and potentially lower costs. The major chip manufacturers are already announcing plans for a 32nm processor by 2009.
The trend that follows naturally from smaller feature sizes is that of incorporating more processor cores per chip. PCs on sale today already have dual- or quad-core processors, and in the future we will see many tens of cores employed. Such processors will only be fully exploited when the software running on these machines is written to take advantage of parallel threads of processing. Cache memory on these multi-core processors is now being made shareable between the cores.
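The idea of software written around parallel threads can be sketched in a few lines of Python. This is an illustrative sketch only: the data, the four-way split and the worker function are all invented for the example, and on CPython the global interpreter lock means CPU-bound threads like these will not actually run on separate cores at once.

```python
from concurrent.futures import ThreadPoolExecutor


def partial_sum(chunk):
    """Each worker thread handles one slice of the data."""
    return sum(chunk)


data = list(range(1_000_000))
chunks = [data[i::4] for i in range(4)]  # four interleaved slices, one per worker

# Four threads each process their slice; the partial results are combined.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # same answer as summing the data in one go
```

The structure is the point: once work is expressed as independent chunks plus a combining step, a runtime with true parallelism (multiple processes, or a language without a global lock) can spread those chunks across however many cores the chip provides.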
So what will systems and applications software developers do with all this extra processing capability in the future? First, we can imagine that some 'spare' capacity will be used to enhance the usability of the computers and other devices we have. This may be enabled partly by the adoption of interfaces that are more natural for humans but demand far heavier processing from machines - examples are speech recognition, computer vision and gesture interpretation.
Return of the lag
Another use for extra processing capability is to let portable handheld devices run more scalable and capable systems software, allowing multiple responsive applications to run simultaneously. We are already starting to see this on a few devices - however, for many people there is still a noticeable lag when running processor-intensive applications on mobile devices, and often the user is limited to running one application or service at a time.
Regardless of how these advances in computer processing capacity are used, they promise a very exciting future for electronic devices of all types, as the raw processing power of the chip approaches and then overtakes that of the human brain. Maybe it will be the machines that ultimately exploit the potential of the power available to them...?