Quantum of solace

Circuit designers are turning to digital techniques to improve analogue performance on sub-micron processes.

A desire to produce data converters at the latest and most cost-effective CMOS semiconductor manufacturing nodes has led to increasing interest in replacing portions of traditionally mixed-signal circuit design with digital logic.

This trend has accelerated over the last three to four years, particularly as chip production has moved into the deep sub-micron era.

As the on-chip real estate and power requirements of analogue design present obstacles for any process shrink, digital-for-analogue (DfA) is seen as a way of stimulating activity in emerging sectors such as smart and tiny ultra-low power sensors, especially those that depend on power scavenging, as well as in better established markets such as mobile communications.

High resolution

One factor that is helping to push along DfA innovation is resolution. To date, even its most aggressive proponents have only been able to achieve resolutions in the low-to-medium range - up to about 10bit - implying that the resulting designs must concentrate on conversion tasks that are relatively easy to accomplish or that are highly defined and easily recognised.

An important factor here is that achieving high - and, some would say, even medium - resolution has traditionally demanded both high-gain operational amplifiers (op-amps) and a high level of capacitor matching. Once a designer starts to work at the 90nm node or below, however, the low supply voltage that entails makes op-amp implementations especially troublesome. From 90nm downwards, there are also thorny issues with analogue transistor performance, many of them related to yield and design-for-manufacture.

Last month's International Solid State Circuits Conference (ISSCC) did see the resolution envelope pushed very slightly. Researchers from Taiwan's MediaTek described a digitised 1V 200Msample/s pipelined ADC with 11bit resolution that had been fabricated at the 65nm node. They also said that their target was to translate their approach to DfA into high-definition video and high-end communications.

However, a more typical example of the DfA innovation dominating events like ISSCC was offered by the Belgium-based research institute IMEC, one of this trend's real pioneers. Its headline paper at the conference described an ultra-low-power ADC based on its 'comparator-based, asynchronous binary search' (CABS) architecture. This combines a 1bit 'coarse' ADC and DAC step followed by a 6bit sub-ADC. Inputs are applied to all the device's comparators simultaneously, but only six are used for the binary search. This drives the efficiency of the 7bit, 90nm CMOS device to a figure of merit of just 10fJ/conversion step for a sampling rate of 150Msample/s.
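The binary-search principle behind an architecture such as CABS can be illustrated with a short behavioural model - a hypothetical sketch of the search itself, not IMEC's circuit. Each decision halves the remaining input window, so an n-bit result needs only n comparator firings:

```python
def binary_search_adc(vin, vref=1.0, bits=7):
    """Behavioural model of a binary-search ADC conversion.

    Each step compares the input against the midpoint of the
    remaining search window and halves the window, so an n-bit
    conversion needs only n comparator decisions.
    """
    lo, hi = 0.0, vref
    code = 0
    for _ in range(bits):
        mid = (lo + hi) / 2
        bit = 1 if vin >= mid else 0
        code = (code << 1) | bit
        if bit:
            lo = mid   # input is in the upper half of the window
        else:
            hi = mid   # input is in the lower half
    return code

# e.g. binary_search_adc(0.5, 1.0, 7) -> 64 (mid-scale code)
```

In the asynchronous version each comparator decision triggers the next directly, without waiting for a global clock edge, which is part of what keeps the energy per conversion so low.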

Design hotspots

While the resolution debate continues, the DfA trend in converter design is beginning to coalesce around a number of priority design tasks and goals. Particular sweet spots for innovation are parallelism, visible in IMEC's approach, calibration - a key part of MediaTek's strategy - and redundancy.

Another example of parallelism was contained in a joint ISSCC paper from the University of Texas at Austin and Analog Devices. It seeks to combine the performance benefits of the successive-approximation ADC with the power advantages of a flash ADC. The background to this is explained by authors Zhiheng Cao, Shouli Yan and Yunchu Li.

"The energy per conversion of a successive-approximation ADC is approximately linearly proportional to the resolution, while that of a flash ADC is exponentially proportional," they write. "This implies that successive-approximation ADCs are more energy efficient, only when the number of bits is larger than a certain threshold, below which flash ADCs become more efficient. It is also known that when the clock frequency is close to the upper limit of the process, the energy per operation increases dramatically and a little reduction in speed can be traded for large power savings."

In response, the UTA-ADI team applies a parallelism strategy, treating each large successive-approximation conversion as a cascade of multiple smaller successive-approximation conversions with lower bit-counts. As a result, "increasing sampling frequency and/or power efficiency can be considered by replacing each sub-conversion process with the flash architecture".
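A minimal behavioural sketch of that idea - one long conversion split into two coarse flash-style sub-conversions - might look like this. The function and parameter names are hypothetical, and no circuit non-idealities are modelled:

```python
def two_step_flash(vin, vref=1.0, coarse_bits=3, fine_bits=3):
    """Behavioural model: a 6bit conversion as two 3bit flash steps.

    The coarse step resolves the top bits; the fine step resolves
    the residue, so two flash decisions replace six serial
    successive-approximation cycles.
    """
    coarse_lsb = vref / 2 ** coarse_bits
    coarse = min(int(vin / coarse_lsb), 2 ** coarse_bits - 1)
    residue = vin - coarse * coarse_lsb       # what the coarse step missed
    fine_lsb = coarse_lsb / 2 ** fine_bits
    fine = min(int(residue / fine_lsb), 2 ** fine_bits - 1)
    return (coarse << fine_bits) | fine       # assemble the full code
```

Each sub-conversion needs only 2**3 - 1 = 7 comparators, so the exponential cost of flash stays contained while the serial decision chain shortens.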

In the final chip, the design therefore uses two successive-approximation ADCs clocked at 2.5GHz, run in a parallel, time-interleaved architecture to sample at 1.25Gsample/s. It has been fabricated in 130nm CMOS with 6bit resolution and consumes 32mW at that rate.
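Time interleaving itself is straightforward to model: sub-converters take alternate samples and their output streams are merged back into one, so each runs at a fraction of the aggregate rate. A generic sketch, not the UTA-ADI design:

```python
def interleave_samples(samples, n_channels=2):
    """Distribute a sample stream round-robin across sub-ADC
    channels, then merge the per-channel streams back in order.

    Each channel sees only 1/n_channels of the aggregate sample
    rate, relaxing the speed requirement on each sub-converter.
    """
    channels = [samples[i::n_channels] for i in range(n_channels)]
    # ...each channel would be converted independently here...
    merged = [channels[i % n_channels][i // n_channels]
              for i in range(len(samples))]
    return channels, merged

chans, out = interleave_samples([0.1, 0.4, 0.2, 0.8])
# chans splits the stream into [0.1, 0.2] and [0.4, 0.8];
# out reconstructs the original sample order
```

In a real chip the difficulty lies in matching gain, offset and timing between the channels - mismatches appear as spurious tones in the output spectrum.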

Calibration is a response to the 'imperfections' encountered in analogue transistors under aggressive process-node scaling, and can run either in the background or the foreground of digitised data-conversion processes (see page 34). One of the key challenges is controlling the overhead required for the calibration to converge: designers have, for example, used both split and two-channel ADC architectures in the past to enable calibration at speed. The alternative has been calibration times of several seconds or longer - far too long for many of today's applications.

The problem with traditional strategies, though, is that they greatly increase the chip's footprint and increase power consumption, threatening to cancel out the advantages of moving to deep sub-micron production.

In the MediaTek design, a stage output-code generator (STOG) is inserted into the calibrated stage, made up of a sub-ADC with reference levels set by the stage itself. It outputs the original multiplexed ADC code. This and the STOG code are then used to calibrate code that has passed through a digital background-calibration processor and correlated against a version of the original that has had a pseudo-random signal inserted into it. The result, MediaTek says, can deliver a 16-fold improvement in the final convergence.
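Pseudo-random injection of this kind is a standard background-calibration technique: a known ±1 sequence is added at the stage input and correlated against the digital output, and because the wanted signal is uncorrelated with the sequence, the correlation isolates the stage's actual gain. A simplified, idealised sketch - not MediaTek's circuit, with assumed constants and no quantisation modelled:

```python
import random

def estimate_stage_gain(n=50000, true_gain=1.95, dither_amp=0.1, seed=1):
    """Estimate a pipeline stage's analogue gain by correlating a
    known pseudo-random dither sequence against the digital output.

    Idealised behavioural model: the constants are illustrative,
    and quantisation and circuit noise are ignored.
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        pn = rng.choice((-1, 1))          # known pseudo-random bit
        signal = rng.uniform(-0.5, 0.5)   # input, uncorrelated with pn
        out = true_gain * (signal + dither_amp * pn)
        acc += out * pn                   # signal averages out; dither survives
    return acc / (n * dither_amp)         # converges towards true_gain
```

Once the measured gain is known, the digital back-end can correct the stage's codes for the deviation from its ideal value; the convergence-speed improvements MediaTek reports address how many samples such a correlation needs before the estimate settles.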

Redundancy, finally, is perhaps the most obvious design strategy for aggressive process nodes. Chip density is now so high that some transistors can be left unused, both to guarantee performance and to hedge against production yield problems.

A design presented at ISSCC by researchers from the Massachusetts Institute of Technology showed how far redundancy can be taken to meet both performance and manufacturing criteria.

Its highly digital 6bit flash ADC contains two arrays, each of 127 dynamic digital comparators used in combination with static voltage offsets to set the comparator thresholds. A critical point is that these threshold values vary from die to die, so the process is one of determining which of the 254 comparators to enable - and, for the ADC's pseudo-differential mode, this number can be as low as 126.
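Conceptually, the selection step picks, from the measured offset-affected thresholds, the subset that best approximates an ideal uniform ladder. A simplified illustration of that idea - not MIT's actual algorithm:

```python
def select_comparators(measured, n_levels):
    """From a redundant pool of comparators whose thresholds vary
    die to die, enable the one closest to each ideal ladder level.
    """
    # Ideal uniform thresholds for an n_levels flash ladder.
    ideal = [(i + 1) / (n_levels + 1) for i in range(n_levels)]
    enabled = []
    pool = list(enumerate(measured))
    for target in ideal:
        idx, thr = min(pool, key=lambda p: abs(p[1] - target))
        enabled.append(idx)
        pool.remove((idx, thr))  # each comparator used at most once
    return enabled

# e.g. from 8 measured thresholds, pick the best 3 for a small flash core
picks = select_comparators([0.05, 0.22, 0.31, 0.48, 0.55, 0.71, 0.8, 0.9], 3)
# picks the comparators nearest 0.25, 0.5 and 0.75
```

Because the offsets are random, over-provisioning the pool makes it highly likely that a good approximation to every ideal threshold exists somewhere on the die - redundancy converts a yield problem into a selection problem.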

More radical approaches to digitised data design are being taken in addition to those based on the trio of parallelism, redundancy and calibration. Indeed, all three often need to be augmented by such techniques. But there is a feeling that more can still be squeezed out of these three umbrella areas to take ADC and DAC design further.
