21 May 2013 by Chris Edwards
To help its engineers make informed decisions, vehicle-maker Volkswagen set out to find microcontroller (MCU) benchmarks that focus on energy consumption rather than performance. The company has helped define an initial set of tests developed by the Embedded Microprocessor Benchmark Consortium (EEMBC) that have now gone public as the group tries to recruit more automakers.
Markus Levy, president of EEMBC, says the problem VW and others face is that MCUs from different vendors have their own approach to sleep states. Not only are the states named inconsistently, they often differ in their wakeup times, which can make some of the modes difficult to use in real-world applications.
Rather than try to develop a normalized test in keeping with the approach taken by EEMBC on its existing performance benchmarks, the new benchmarks "will let MCU vendors optimize for the test workload that we provide for them", Levy said. "It's impossible to standardize the operating modes. They can determine what sleep modes make the most sense for each benchmark.
"We are not targetting powertrain for this but more body electronics and instrument clusters. The MCUs don't have as much horsepower and are often in sleep mode," said Levy.
A PXI chassis and test rig made by National Instruments is used to generate traffic and capture test results, which attempt to replicate typical behavior in body electronics. "VW is coming up with the knowledge of how things like a trunk-latch mechanism work," said Levy.
"One of our goals is to measure the overall energy efficiency as transfer frequency changes and also look at what happens when we increase the frequency of packet transfers to the point where we start to get dropped messages," said Levy. "This will go beyond just measuring energy efficiency but also look at robustness. Do these modes affect capability? These are things that are not normally tested. We can look at if you use these features, will the system operate the way you expect?"
Chipmakers will be able to use hardware-assist features such as low-level interrupt managers as these are often placed on low-energy MCUs to reduce the amount of time that the processor core spends awake.
Although vendors will be able to optimize code, they will not have total freedom - EEMBC tries to develop benchmarking rules that overcome old tricks such as optimising code blocks away completely in order to deliver 'better' results. Silicon vendors will have to use mostly standard software based on an existing framework: the Autosar Microcontroller Abstraction Layer (MCAL). "It takes a lot of the gamesmanship out of this. We don't want vendors to come up with ultra-optimized drivers for these benchmarks. MCAL makes the setup harder but helps to level the playing field and it will be what they will be using in automotive."
Trouble at the top
12 May 2013 by Chris Edwards
For both Intel and Microsoft - companies that profited massively from IBM's momentous decision to outsource much of the technology for the PC - the trajectory over the next few years could well look like IBM's as the 1980s came to an end.
In the early 1990s, IBM went from being the company that no-one got sacked for choosing to the company that had to sack a bunch of people in a hurry to stay in business. In 1993, IBM reported the biggest loss in US corporate history.
Three years earlier, Big Blue had come to realise that its decision to engage Microsoft in producing the OS/2 operating system meant to succeed DOS had been a mistake - Microsoft had quietly and then publicly switched horses to promote its own Windows operating system. In hindsight, however, it simply meant that the inevitable fall was accelerated and arguably helped set a path towards a reinvigorated company rather than a slow, drawn-out death.
Although it's possible to point the finger at IBM and say the company was wrong to go with off-the-shelf, easily cloned parts for the personal computer, in reality the company was fighting an idea - the decentralisation of computing. If IBM had not invented the PC, the chances are that Apple would have come to dominate personal computing in the 1980s. IBM's fall would have come later and Apple itself would have found itself prey to a generation of cheap 'PCs' running something free and Unix-like.
Whatever happened, the old hegemony was going to come to an end and things would have to get a lot worse before they got better. IBM had to free itself of its image as a monopolist. It still has not entirely gone, as recent antitrust cases indicate, but, for the most part, computer users have many more choices than they did in the past.
Users now have more choice in personal computing than they have had since the end of the 1970s. As Android moves in at the tablet end, the position of Intel and Microsoft is now beginning to look less favourable than it did just two years ago. And neither company is particularly well-placed to carve out a slice of the new market when customers can recall only too well what happens when either gets into a dominant position. In truth, that has been true of any technology company in the past 50 years, but the devil you don't know is generally the one that customers tend to favour when the old order stumbles into problems.
IBM demonstrated that companies of such size and power have a long time in which to reinvent themselves - but, for customers to accept what they could become, they might have to see things get a lot worse before the turnaround starts. The moves by Intel and Microsoft suggest they see what's ahead, but it's hard to see either being in a position, yet, to stare into the abyss and make the changes that are necessary.
The embedded tools cycle turns again
30 April 2013 by Chris Edwards
Code Red is a comparatively new player in the software-tools business and it is easy to see why NXP was interested in buying the company. Code Red developed a suite of tools built around the open-source Eclipse environment aimed specifically at the NXP LPC series of ARM-based microcontrollers. More recently, however, Code Red implemented support for competing devices offered by Freescale Semiconductor. This time next year, the Code Red operation will cease support for the non-NXP parts.
Although it's always possible to look at these deals as a way of freezing out a competitor, they are generally more about giving the microcontroller supplier a way of encouraging developers to choose its products. But there has always been a tension around this decision - when a supplier is in a strong position, it often makes sense to favour third-party tools suppliers operating in a larger 'ecosystem' rather than to try to keep tools inhouse.
In the late 1980s through to the mid-1990s, Intel and Motorola operated these opposing strategies in the embedded market. Intel had a broad suite of inhouse tools. Motorola, which dominated the microcontroller and embedded processor market, had a healthy group of third-party companies supporting its architectures. There was no need to have inhouse tools. Later in the 1990s, Motorola bought Metrowerks at roughly the same time it began to lose its grip on the market. By that time, Intel had given up trying to sell its own tools.
As the 1990s drew to a close, Microchip Technology demonstrated successfully how its own, very low-cost tools could help the company break the hold Motorola (later Freescale) had on the 8bit market. A number of microcontroller startups have attempted to use the same technique to break through themselves.
NXP's purchase indicates how competitive the ARM-based microcontroller market has become. Just about every company selling a 32bit microcontroller today has an actively used ARM licence. And those that don't probably will have one quite soon. It has made the market intensely competitive. Software tools provide an additional way to compete that isn't just based on hardware price and features. If designers can get up and running on a family of devices faster because the tool support is better, they are likely to keep buying.
But, if the plan pays off for NXP - and others who follow suit - at the expense of those who don't, the chances are the cycle will turn again and the shift will be back to dealing more heavily with third-party tools providers. The software-tools cycle continues to turn.
AMD's split personality
28 April 2013 by Chris Edwards
The PC market has taken a hit from the growth of tablet computing and that had a knock-on effect on AMD, which found itself seeking new management and a new direction.
Arun Iyengar, vice president and general manager of AMD's embedded business unit, says the company plans to become more "balanced" as it reduces its reliance on the desktop market. By the fourth quarter of 2014, the company aims to have 20 per cent of its business coming from embedded applications. It's a tall order, unless the company is simply expecting the desktop market to decline rapidly. But the company is in a novel position.
Although AMD still sells processors from the Geode line acquired from National Semiconductor, this is the company's first foray of its own into embedded processors for more than a decade. The architectural choices that the company has made pitch the G series APU as a higher-horsepower alternative to Intel's Atom family.
To offer a design that can provide better performance than the current Atom lineup, AMD has opted for a quad-core design derived from its desktop accelerated processing units (APUs), which combine a general-purpose x86 subsystem with a graphics processor unit (GPU).
AMD is not going to place all its bets on the x86 architecture. Another family of G-series processors will turn up later based on the ARM architecture - the company has already taken out a licence to have a go at the low-power server market. Competition in the ARM sector is much stiffer but, by picking ARM and x86, AMD is in the unusual position of being able to support both of the architectures that are likely to dominate embedded processing for a while. Intel has an ARM licence but, unless the x86 starts falling badly behind, is in no hurry to use it.
Change is on the way for computer architecture
22 April 2013 by Chris Edwards
So-called scale-out applications - big-data software used to handle search requests at Google and social-network queries at Facebook among others - are stressing today's processor designs. Or rather, they aren't, because the memory they are intended to use cannot keep up.
These applications have a massive thirst for memory and are usually designed to share the burden across many server blades. As a result, the application code is relatively small but the amount of data it can access is extremely large. Not only that, once a piece of data has been analysed, it is unlikely to be needed again for some time. This is different to many conventional desktop and server applications, which use relatively small but fast caches to store data temporarily so that it can be reused more easily. In a scale-out application, researchers have found that caches have limited use.
So, these processors spend a lot of time waiting for memory and relatively little time actually working on the data. So much of the performance is down to latency that the rate at which data can be delivered once it is ready - the memory bandwidth - matters comparatively little.
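A rough, illustrative calculation shows why: a core chasing dependent pointers through memory with, say, 100ns of access latency can complete at most about ten million accesses per second. At eight bytes a time, that is only around 80Mbyte/s of useful traffic - a small fraction of what a modern DDR channel can supply - so extra bandwidth simply goes unused. (The figures are assumptions for illustration, not taken from the research.)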
When first analyzing server performance while working at Microsoft, Christos Kozyrakis, associate professor at Stanford University, thought the memory usage of scale-out applications was an experimental error. "The CPU utilization was high but memory bandwidth was extremely low," he said. "At first, I thought it was a mistake."
Short of a massive improvement in main memory latency, the reality is that scale-out servers do not need the bandwidth. And memory designed for high bandwidth is simply wasting energy. There is a problem though. You cannot easily trade off bandwidth against energy in servers with today's components.
"It turns out that there is a very good technology available now, which is LPDDR2," said Kozyrakis. "Can we build server memory out of LPDDR2 chips. You can't build high-capacity chips because they were not designed for that. However, you could achieve capacity through die stacking."
Simulating an LPDDR2-based server against a conventional design, Kozyrakis found no appreciable difference in performance. "You didn't need the bandwidth to begin with."
At EPFL in Switzerland, a team simulated a number of server designs and found that it makes sense to rework how processors talk to memory - as well as the cache design - to waste fewer resources on functions these processors do not need.
Similar stories are emerging in embedded computing where concerns over battery life and power are encouraging designers to think again. One example is Xmos, which developed a new type of multithreaded processor that does not cycle aimlessly when it has nothing to do. Because most processors assume a Unix-like timesharing environment, they will continue to run an idle loop when there is nothing to do. The Xmos architecture only runs when there is work to do, which cuts some of the power it would otherwise need.
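As a software analogue of the difference - this is not Xmos code, which uses its own language and hardware event handling, just an illustration of the idle-loop point - compare a consumer that polls for work with one that blocks until work arrives:

    // Illustration only: a polling loop keeps the core busy even when idle,
    // while a blocking wait lets it do nothing until an event arrives.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    std::queue<int> work;

    void polling_consumer() {          // the 'idle loop' style: burns cycles while waiting
        for (;;) {
            std::lock_guard<std::mutex> lock(m);
            if (!work.empty()) { std::printf("polled: %d\n", work.front()); work.pop(); return; }
        }
    }

    void blocking_consumer() {         // the event-driven style: runs only when there is work
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !work.empty(); });
        std::printf("woken: %d\n", work.front()); work.pop();
    }

    int main() {
        std::thread consumer(blocking_consumer);   // swap in polling_consumer to compare CPU load
        { std::lock_guard<std::mutex> lock(m); work.push(42); }
        cv.notify_one();
        consumer.join();
        return 0;
    }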
Kozyrakis reckons it's "a great time to be in computing" because of the demand for change that is now building up.
Mead and Conway's legacy of cheaper access to silicon
8 April 2013 by Chris Edwards
Introduction to VLSI Systems by Carver Mead - who coined the term "Moore's Law" - and Lynn Conway remains the blueprint for the way engineers think about IC design, despite some of the chapters turning out to be off the mark. At a debate during the recent DATE conference in Grenoble, France, the question came up: what is the legacy of Mead and Conway's book?
The answer may seem surprising: cheap prototyping. But it's a legacy that's continuing to deliver IC designs at a fraction of their headline cost. It's a legacy that allows crowdfunded companies such as Adapteva - which raised close to $1m through a Kickstarter campaign - to stand a chance of getting a 28nm multiprocessor into production. The key is a relatively cheap prototyping option called the multiproject wafer (MPW).
The principle behind the MPW is simple. The masks used to project an image of the design onto silicon wafers are usually much larger than most chip designs. The mask's reticle typically defines an area of around 600 square millimetres. Some designs do fill this space - notably Intel's very expensive and not exactly successful Itanium processors. Most are much smaller - only a centimetre on a side or less - because yield often collapses as you make the chips bigger. Splitting the design across two chips is often more economical.
Because an individual chip is much smaller than the reticle, there is nothing to stop people from sharing a mask set and its cost. That led to the birth of the MPW concept.
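A rough worked example - with the mask-set cost picked purely for illustration - shows why sharing matters:

    // Illustration of MPW economics; the mask-set cost here is an assumption.
    #include <cstdio>

    int main() {
        const double reticle_mm2   = 600.0;    // reticle area of around 600 sq mm, as above
        const double die_mm2       = 25.0;     // a hypothetical 5mm x 5mm prototype die
        const double mask_set_cost = 1.0e6;    // assumed mask-set cost in dollars

        // Ignores scribe lines and spacing between the designs.
        const int slots = static_cast<int>(reticle_mm2 / die_mm2);
        std::printf("%d projects can share the reticle at roughly $%.0f each\n",
                    slots, mask_set_cost / slots);
        return 0;
    }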
A key thrust of the Mead and Conway approach was to provide students with hands-on experience of chip design. "IC design was an expensive proposition. But putting multiple chips onto the same wafer allowed people to do design at a cost that was not prohibitive," said Professor Alberto Sangiovanni-Vincentelli of the University of California at Berkeley.
Bernard Courtois, director of CMP and one of the first to teach Mead and Conway's methods in Europe, said: "One major follow-up was the establishment of service centers for manufacturing ICs, sharing the cost by sharing the wafers."
Today, the multiproject wafer services provided by groups such as MOSIS, CMP and Europractice allow hundreds of universities, research institutes, and small and medium-sized enterprises (SMEs) to gain access to even advanced process technologies such as 28nm CMOS and fully depleted silicon-on-insulator (FD-SOI).
The reason why it is possible to share designs and have them manufactured at a fab at the same time is that the separation of design and manufacturing that Mead and Conway proposed has been upheld successfully. Previously, there was a temptation to change the 'recipe' on the production line for individual products.
"The principle of design foundries was born then," said Sangiovanni-Vincentelli. "The idea that you have to limit the freedom of design using regular structures using tools was another important principle."
Professor Jan Rabaey of UC Berkeley said the imposition of rules maintained a clear separation between design and manufacturing but "maintained a degree of freedom to allow the designer to be creative".
Antun Domic, senior vice president and general manager of the implementation group at Synopsys, said: "The separation of design from manufacturing can't be underestimated. You were sending a layout to someone and a chip would then come back.
"There were misses," said Domic, pointing to the book's emphasis of the then leading technology NMOS. "CMOS was much closer than anyone thought. I remember that the first Vax processor was NMOS. The second was CMOS."
The other miss was the concept of the 'silicon compiler'. Commercial tools that built on this work ultimately failed. However, the idea of using a high-level language to generate chip layouts is today one of the cornerstones of modern design.
Time for the cluster of clusters
31 March 2013 by Chris Edwards
Four of the European centres for electronics research and business development have set up a project to try to create a virtual "silicon cluster".
Gerhard Kessler, project manager at Silicon Saxony Management, said in a meeting of the four groups at the DATE conference in Grenoble, France: "The partners are linked by a common goal, to create the world's leading centre for high-efficiency electronics. We want to secure European knowhow for Europeans. We want to open up new markets and create new opportunities for SMEs."
Fabien Boulanger, director of micro- and nano-technologies, said: "The cluster can be much more visible on a European scale, making it possible to go for more projects."
The four groups involved in the three-year Framework Program 7 project are: Silicon Saxony, centred on Dresden, Germany; Minalogic in Grenoble, France; DSP Valley of Belgium; and High Tech NL from The Netherlands.
As yet, clusters in the UK and other countries have not joined the project, which aims to scope out what form a super-cluster might take. If they do, they will not be funded by the EU, which may hinder its expansion, at least in this particular project.
Kessler said the project got the go-ahead from the European Commission after Minalogic and Silicon Saxony started to co-operate more closely three years ago. The regional development directorate general (DG Regio) said it would back a project where "at least four regions would have to work together", Kessler explained.
"The whole thing came together at very short notice and only because Dresden and Grenoble were already close. They had connections to Holland and Belgium," said Kessler. "The EU also wanted to make sure the bigger clusters were involved."
"We have invited other clusters to take part. We are in the process of adding other clusters as invited members but the core activity is done by these four groups," Kessler said.
Véronique Pequignat, manager of business investment at Grenoble region development agency AEPI, said: "It was definitely a bottom-up approach and it had to be in regions where research, education and local government were partnering together."
Kessler added: "For me, I was longing for such kind of co-operation for 20, 30 years, when I asked myself 'why don't the regions work together?' But then they were rich and nobody thought about it. The situation has changed. We have to learn to co-operate."
"Distributed phones" to drive the Internet of Things
26 March 2013 by Chris Edwards
The sensors themselves will not be expensive. They have to be small and run for ten years or more off something like a lithium-coin battery. Even so, there is no guarantee that anyone scattering temperature, chemical and other sensors around offices, shopping centres and streets will collect much of Cisco's estimated $14tr. Most of the money is going to go to applications that consume the data they produce by combining the readings of many of them together. It's not entirely clear how the sensor installers get paid.
During his keynote at the DATE conference in Grenoble, France earlier this week, Benedetto Vigna, general manager of the analogue, MEMS and sensors group at STMicroelectronics, outlined a way that many of these sensors will get into the world. We will be carrying them.
Vigna runs the group at ST that helped get motion sensors based on MEMS - micromachining technology - into the Nintendo Wii and the Apple iPhone. "We have in front of us a unique opportunity, similar to that five years ago in MEMS. The Internet of Things is at the same stage we had five to seven years ago. We are in that phase - then they asked 'why do you need accelerometers in a phone?'"
Vigna said people such as Apple's head Steve Jobs saw the potential half a decade ago. "Mobile phones were using MEMS from 2002. But they only became a selling feature with the iPhone 1," Vigna said.
Vigna said there are four key development areas for MEMS that will help drive the Internet of Things: motion; acoustic; environmental and chemical sensors; and microactuators "where you need to make small movements".
Vigna argued that the mobile phone will drive many of them: "The mobile phone is an excellent opportunity to miniaturise technology. The smartphone is a way to stress suppliers and research centres to optimize the sensors for a volume market. Over the next few years, the phone will start to have environmental sensors - measuring pressure, temperature and humidity."
At the same time, the phone will make greater use of sensors around it via the cloud. Vigna called this the "distributed phone".
Looking at Vigna's projections, a distributed phone is not all that likely to be pulling its data from ambient sensors - it is more a device that is not too fussy about where it gets some of its information. The chances are that most readings will come from sensors inside others' phones. Think of it as sensor crowdsourcing.
British and Russian schools of thought converge on hot electronics
18 March 2013 by Chris Edwards
"The issue of heat is becoming more critical in electronics," says Keith Hanna, director of marketing and product strategy at Mentor Graphics' mechanical analysis division. Entries for Mentor's own PCB awards have shown that area consumed by components has halved since 1995 and component density risen by 3.5 times.
At the same time, the use of forced-air cooling has become more problematic. Consumers do not like the noise and data-centre operators do not appreciate the air-conditioning bills. In automotive, the situation is even more serious. Hybrid and electric vehicles need high-density power electronics but excess heat has led to a number of expensive, high-profile vehicle recalls.
Despite these issues, heat simulation often gets left to the end of a project, largely because the job is hard to do earlier, says Roland Feldhinkel, product line director for Mentor Graphics' mechanical analysis division: "CFD is quite complex technology and can't be applied automatically in each stage of the design process."
The big problem is that building the mesh that maps a physical system onto the differential equations used in CFD is tricky and troublesome under the methods taught by the 'British school'. Most CFD simulators in use today derive from concepts developed by Professor Brian Spalding at Imperial College. Each cell needs to contain either fluid or solid, so the mesh needs to follow the physical structure, no matter how irregular it is.
Electronics designs have the benefit of being mostly rectangular, which is good for CFD meshes, but they are being squeezed into curved packaging. Heatsinks are being machined to fit underneath curved surfaces and memory sticks are being tilted to reduce system height. These changes stop the components from being described as objects with edges that lie on the pure Cartesian grid fluid-flow specialists would like to use.
An alternative approach was taken by researchers in the Soviet Union, starting in the 1980s, that made greater use of empirical experiments to gauge how fluid flows around complex objects and used those to drive simulation.
Feldhinkel says: "They had a tremendous lack of computer resources but practically unlimited access to all testing facilities in the Soviet Union. The CFD data was calibrated against nature from the beginning.
"The result was that the boundary layer does not need to be solved with a mesh," adds Feldhinkel.
Instead Mentor's new tool Flotherm XT can use 'wall functions' - formulas that take account of the cell's resistance to transporting momentum and heat - derived from the empirical data. As it decomposes the design, the tool matches the 3D structure to a mesh of reasonably regular cells that contain different wall functions.
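To give a sense of what a wall function is, the textbook example - not necessarily the specific form Flotherm XT uses - is the log-law of the wall, u+ = (1/κ)·ln(y+) + B, which relates the flow velocity near a surface (u+, scaled by the friction velocity) to the distance from the wall expressed in 'wall units' (y+), with κ ≈ 0.41 and B ≈ 5.0 for a smooth wall. A formula of this kind stands in for the thin boundary layer so that the mesh does not have to resolve it cell by cell.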
"The software does not necessarily converge immediately but the tool will re-identify the solver parameters and start again if it cannot converge. It works with a knowledge base inside the software," said Feldhinkel.
The result of the merging of the two schools of thought, in principle, is a more flexible tool that can handle automatically the unusually shaped designs now being put together.
8 March 2013 by Chris Edwards
The embedded systems industry is beginning to come round to the idea that its systems need to be better protected. And for those not yet ready, Stuart McClure, CEO and president of Cylance, was there to ram the point home as he described a number of hacks during his keynote at the Embedded World show in Nürnberg this week.
The trouble with embedded systems, said McClure, is legacy. Not only do they get installed and sit around in systems for a long time, they have to communicate with older systems that were designed in a more innocent age. An age where things were not routinely wired up to the internet.
"I often hear this argument: 'But we're embedded. Our systems are not accessible'," said McClure, but he added that catalogues of internet-connected devices show that a growing number of embedded systems are now accessible remotely and potentially vulnerable to hacks.
Legacy can bite new systems. McClure described a hack on 'smart TVs' that uses the humble infrared sensor - an input fitted to TVs since the 1980s. He explained he got the idea for the hack when he walked into a lobby that contained 15 of the Samsung-made smart TVs and rang a colleague to work out what the device's weakness would be. Although initial work focused on the Bluetooth port - a common vulnerability of mobile phones - they found that the IR port provided easier access to the core firmware as, being a port carried over from simpler, older designs, it performed no authentication at all.
Cylance developed a high-power transmitter based on an IR laser rather than the LED found in most remote controls so that it would function at distances up to 300m. "With that we can perform full reconfiguration of the TV including its ability to act as a wireless access point. We can connect in, start up the access point and then hack devices on the network. We are going to hack other devices on the network and extract personal information such as credit card numbers and then send them out through a Twitter account," McClure explained.
"The latest smart technology has legacy challenges. You have to carry this kind of legacy technology but it ends up being the Achilles heel of the embedded system," added McClure.
A second demonstration involved a hack on an industrial communications system that interfaced to a compression pump - the pump was ordered to overpressure a plastic bottle to the point where it exploded. "These [communications] systems are huge. They allow you to control any infield device that is supported," said McClure, pointing to the range of I/O ports available on the example system made by Tridium. "Every time you add an input you have a new form of attack.
"The virtual and physical worlds are coming together and we are going to have to deal with this," said McClure.
The end of the mobile generations
20 February 2013 by Chris Edwards
At Mobile World Congress (MWC) next week, operators and equipment vendors will be all over 4G, but what they will be selling and using is no longer a single network protocol but a smorgasbord of standards. The operators expect to stay on a mixture of 2G, 3G and 4G for some time to come because it is too expensive and problematic to turn on all the 4G features at once.
Even if they had wanted to, it took until last year for the industry to come up with a standard way to get voice calls onto the 4G network in such a way that, if a user wandered out of range, an existing 3G cell could pick up the call. South Korean operators have upgraded users onto high-definition voice on 4G, but they are practically going it alone. The others are not in that much of a hurry - dedicating voice to the existing 2G and 3G networks makes more sense for them right now even though 4G is more spectrally efficient.
At the data level, users will be switched dynamically between different wireless standards - sometimes 4G on a macrocell, sometimes 4G on a denser urban network of femto- or metrocells or passed over to pockets of WiFi. As the available 4G spectrum fills up with data, operators will look not so much to more complex ways of encoding data - Shannon's limit now looms large there - but of managing users and basestations so that free capacity does not go to waste.
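The limit in question is Shannon's channel capacity, C = B·log2(1 + S/N). As an illustration, a 20MHz channel at a signal-to-noise ratio of 20dB (S/N = 100) caps a single antenna stream at roughly 20MHz × log2(101), or about 133Mbit/s, however clever the modulation. Beyond tricks such as MIMO, further gains have to come from using spectrum, cells and users more cleverly rather than from squeezing more bits out of each hertz.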
The near future will be about massaging traffic to relieve the core network, potentially offloading high-demand content - such as live sports video - to satellite broadcast networks, and making the most of pockets of high-bandwidth wireless.
As 5G evolves, it will not be about reserving chunks of spectrum for big-ticket auctions, but using frequency ranges much more opportunistically - the handsets and basestations of the future will sniff out uncongested spectrum based on the signals flying around them. Today's handset designers worry about the number of frequency bands they have to deal with for 4G. But this is driving a new wave of more flexible radios that will make it easier for operators to licence spectrum at a much more granular level. If a particular slab of UHF happens to be free in a city, it may make sense to put an offer in for that and tell phones to use it rather than wait for the government to parcel up much bigger lumps.
With more of the intelligence of how to use radio devolving to the edges, concepts such as centralised bandwidth auctions will simply become outdated. Maybe it will be like fifth-generation computing - anyone remember that? This next one may also become telecom's lost generation.
20nm process? What 20nm process?
7 February 2013 by Chris Edwards
Instead the focus was on the modified processes that will host these companies' attempts at putting finFETs - already in use by Intel - into production. There is a bit of a war of numbers on this but, essentially, the 14nm processes from Common Platform and the 16nm version offered by TSMC are 20nm technologies that can offer the new transistor type. The focus is clearly on those now as they should be able to offer significantly better performance and use design techniques to get the size down. But no-one was comparing 20nm against these future processes - the benchmark was the existing 28nm technology which people such as Samsung executive vice president KH Kim reckon will be a very long-lived node. The 20nm node looks to be one of those that will quietly fade into the background.
Thanks to a deal with STMicroelectronics, GlobalFoundries is even preparing a new 28nm process. The technology will provide customers with the option of mixing and matching more energy-efficient silicon-on-insulator (SOI) circuitry with traditional 'bulk' CMOS. The process makes it possible to put existing circuits onto the wafer by etching through the insulator layer, so that only performance-critical sections are redesigned to take advantage of SOI.
Mike Noonen, executive vice president of global sales and marketing at GlobalFoundries, said: "The back-end of the process is pretty much identical to [GlobalFoundries' existing] 28SLP platform. The ingenuity of this is to have this very thin silicon channel over an insulator to offer dielectric isolation. You can really dial in optimum performance."
For those who don't want the risk of moving to finFETs and the added complexity of dealing with double-patterned lithography, processes such as the ST/GlobalFoundries 28nm provide a lifetime extension.
For those who do opt for finFETs, Noonen described how the 14nm process would deliver size and power reductions - the key driver of Moore's Law. The company used a design based on a dual Cortex-A9 processor from ARM to benchmark the process. Using ARM's Artisan libraries, Noonen said that, compared to the existing 28nm process, the 14nm technology could deliver a 62 per cent reduction in power for the same performance and would also be smaller. This was possible using a nine-track standard-cell library on the finFET-based process versus a twelve-track library for the bulk 28nm version. The fewer tracks you use for each cell, the smaller it usually works out. Alternatively, a 61 per cent improvement in performance is possible for a given power level.
"It shows what can be done with the finFET platform," said Noonen. "What we have built with Common Platform is a very SoC-friendly solution where you can dial in your required levels of performance."
Kim said Samsung's version of 14nm is nearing readiness with a number of multiproject wafer runs for test chips to be run this year.
What's new is old again
31 January 2013 by Chris Edwards
For what ought to be the most forward-looking section, the world of electronic instruments seems headed inexorably backwards right now. Much like the joke about how many blues guitarists it takes to change a lightbulb, everyone has been lamenting how bad it was for the old ones to die. Two of the best received products at the show - based on how much buzz they generated on the internet - were effectively designed in the 1970s.
The versions shown at NAMM sport some new features, but Korg's miniature MS-20 and the Moog Sub-Phatty are very much the product of an earlier age. They are resolutely analogue inside and that's the thing that has led both companies to come up with these new versions. The inspiration for the Sub-Phatty was the MiniMoog and it is by no means the first attempt by Moog Music to revisit its legendary synth, having already come up with the Voyager - which couples the MiniMoog with the blue LED styling of a pimped-out Ford Focus - and the Little Phatty.
Korg's new MS-20 carries the same look and feel as the original 1978 synth but has been given the bonsai treatment with miniature keys and patch cables based on 1/8in jack plugs rather than the 1/4in versions that have adorned the fronts of modular synths since the days of Tangerine Dream.
Why are these synths back? Curiously, because of digital processing. When digital synths such as the Yamaha DX7 and Korg M1 appeared, they swept the unreliable, mostly monophonic analogue dinosaurs out of the way. Ever since, musicians have thought they were missing something. Even the best attempts to emulate analogue hardware with digital synthesis have met with the aural equivalent of the Uncanny Valley. The sound is almost but not quite there according to proponents of analogue synthesis.
So, the secondhand value of the analogue dinosaurs has steadily risen to the point that synths considered to be pretty mediocre in their heyday now cost what they did new - and that's having adjusted for inflation. So, synth manufacturers have decided to get on the bandwagon, while it lasts, and revisit the glory days of their analogue product lines.
An industry cut back to the bone
22 January 2013 by Chris Edwards
"Macroeconomically, we are looking at a period of slower growth," says Penn, in the wake of the Lehman Brothers collapse that presaged the global slump and slowdown. World gross domestic product has been off by at least one per cent since the investment bank went under.
A long period of fudge, indecision and, in the US, brinkmanship games with the fiscal regime, has eroded confidence. "Confidence is just eroding. It's getting worse and worse," says Penn.
Because of this, governments are trying hard to make people spend money by pushing interest rates down and making saving seem worthless. Which is a bit of a problem in an environment where individual debt remains the biggest problem facing most developed economies. Encouraging people to keep borrowing heavily and spend their reserves is not exactly a good way to deleverage.
Manufacturers though are not helping the governments. Things that might have kickstarted technology spending have not gone so well. And Gartner's most recent numbers indicate semiconductor inventory building up through lack of demand.
"The launch of Windows 8 felt like a damp squib. The iPhone 5 didn't really take off," says Penn.
That may be just as well for the moment.
"Industry as far as we can tell has cut back really steeply. There is not a lot more to cut. The whole world from an industrial point of view is really running on empty. If there is a small uptick it's hard to see how people can cope with it. Our belief is that at some point in time there will be an general improvement in confidence and people will gain more comfort from the situation. When the market does start to rebound things will just fall over because the reserves aren't in there," Penn explains.
"Ten per cent of the world's GDP is dependent in some way on the chip," Penn claims. "I showed that to people in Japan and they said it was closer to 20 per cent."
The long-term trend line for semiconductors is very good, in terms of units. Sudden price drops when vendors panic put a dent in profitability, and there have been whipsaw movements in supply and demand over the past 30 years for which detailed numbers are available. But the long-term trend line for unit demand over those years has remained more or less locked to 10 per cent growth per year, according to Penn.
"People react very violently to these changes. They don't react in a thoughtful way, they either go flat out or stop dead," Penn explains.
Thanks to the dodgy economic climate, chipmakers have cut back on capacity spending.
"I know what capacity is going to be in 2014. And it isn't going to change," says Penn. What people have bought or ordered so far in terms of chipmaking equipment has fixed what they are going to get in 2014 because it takes many months to become productive with that equipment.
"Capex book to bill is now running at 0.8," says Penn. For every dollar's worth of equipment they installed this month, chipmakers have only ordered 80 cents' worth for the future. It means capacity is not expanding as it should and could contract given the way in which older production kit winds up being taken offstream.
"Accumulate that over time and it tells you that capex spend is going down. We had an acceleration this time last year when confidence was higher. We had a splurge but that has been over for the past nine months. The current forecast is flat at best. Flat from a low number is not good news for an industry that has to grow at 10 per cent per year from a unit point of view," Penn says.
Trouble was postponed by a bad year in 2012 which Future Horizons had forecast as growing 8 per cent in value but which fell 2.4 per cent instead when the market turned south in June.
The question that remains, says Penn, is when the economy will finally start to recover. The company has forecast growth of close to 8 per cent this year on the basis that the economy should be more stable this year than last. "These aren't aggressive numbers," he notes.
A fair chunk of the growth could come from a reversal in price in a market that is used to deflation. "If capacity is really as tight as it looks it could be, prices could go up and go up quite rapidly," says Penn.
"If 2013 is the year when the economy turns, you better hang on to your hats in 2014. Because we haven't invested. 2014 is already starved of capacity. That will only get worse over the next six months," Penn claims. "The capacity isn't there to feed it."
The not so smart hub
9 January 2013 by Chris Edwards
This year's Consumer Electronics Show shows the same kind of ambition, as companies such as Samsung tout their idea of the TV as the 'smart hub'. Why remains something of a mystery. Because families tend to congregate around a TV, companies have this idea that if they make the device somehow 'smarter' it will open up some massive new market, a mainline to the consumer's wallet.
The trouble for the manufacturers is that the number of people who want their lounge to be decked out like the USS Enterprise is mercifully small and predominantly male. If you're looking at the TV as a giant iPad, which is what seems to be the current fashion among product designers, then the only conclusion is that you live alone. In which case, a computer with a big screen attached to it is probably going to work as well as one of these smart hubs.
As soon as you have two or more people in the room, you have the issue of what you do with the big-screen machine that doesn't already have an answer. The list is pretty short: watching TV as a group; playing games with, hopefully, more than one controller. That's about it. Yes, you could have people shop and surf while watching TV but they don't need the big screen to do that. People are sitting around already with a combination of phones, tablets and laptops. They really don't need the TV to get in on the act unless manufacturers take a step back and look at what the existing usage models are.
There are things that the TV makers could do to improve the way in which the device interacts with these other ones but the reality is that there is so little money in it that they will not do it. There is very little that is sexy in a decent amount of protocol support to make it easy to transfer data between devices and actually take on the proper role of a 'hub'. There is too much vested interest to make such a hub open enough to be useful and, unless a smaller player decided to put an extensible operating system into their TV or PVR, very little is going to happen.
A different kind of doping
17 December 2012 by Chris Edwards
At the recent International Electron Device Meeting (IEDM), researchers from Mears Technologies and the University of California at Berkeley reported on their modelling work on a technique they believe could be more effective than strain at the 14nm node.
The technique centres on the idea of using a 'superlattice' structure to break up the silicon crystal into layers, almost like mica or slate. Insertions into the silicon lattice break up the periodic structure vertically but they maintain the single crystal laterally.
The element that UC Berkeley and Mears have been working with to act as the lattice breaker is oxygen, putting down atoms at reasonably regular intervals and with much tighter control over the layering compared with traditional doping techniques. However, it looks as though the engineers have not had to resort to the tightly controlled atomic layer deposition techniques that Intel brought in with its high-k metal gate structure for the 45nm process.
The superlattice growth technique apparently uses a standard chemical vapour deposition technique to put down the layers in sequence, controlling the concentration of oxygen to determine which layers are silicon and which are the 'break up' layers.
The technique looks as though it will go hand-in-hand with the existing technique of strained silicon, using different crystal structures around the silicon channel itself to stretch the internal bonds and, as a result, make it easier for electrons to move through.
However, the two methods overlap in their effect, so using both brings diminishing returns. The technique would let chipmakers trade one off against the other, and the researchers believe that oxygen insertion can offer better performance at and below 14nm than strain.
Disaggregation goes into reverse
30 November 2012 by Chris Edwards
Doing a count-up of the number of chipmakers with serious plans for 4G-capable handsets, Jeremy Hendy, vice president of sales and marketing at Nujira, reckons there are about 20 involved. This is in a market where the cost of development is immense, demanding teams of 200 engineers or more. This development effort is one of the reasons why even major players such as ST-Ericsson can lose a lot of money.
In those 20 or so companies, roughly ten are traditional players, five are new entrants from Asia and the remainder come from systems houses who have decided that buying off-the-shelf 4G modems is not for them. One reason is cost. In such a high-volume business the immense development costs fade just as long as you can maintain a high-enough market share.
A reason that's more problematic for the third-party chipmakers is the issue of control and flexibility. A number of companies want development inhouse because they don't want to be hostage to the major suppliers. As Hendy explains, if they have a problem in a particular country because of the way the network is configured - and no two advanced networks are the same - and they go to one of the big names, they risk having their problem lost in a queue of bug-fix requests that might take weeks or months to receive any attention.
Beyond the problem of bug fixes, there is also the question of how a phone maker gets the features it wants into the silicon it uses. The problem for anyone working in this sector is that the software is becoming remarkably inflexible. Complexity limits the degree to which any vendor will support tuning of the software to fit different requirements. Despite the software being there to make it possible to easily make changes to what is a highly programmable engine, the problems of verification have led the software to become practically fossilised inside the flash memory.
Is this reaggregation a long-term trend? Probably not. For one, companies tend to invest in inhouse silicon only to rediscover how expensive projects can become, especially if the project fails and they wind up buying the third-party option anyway.
As the core of the problem lies in the software rather than the hardware, one or more of the players might decide to open up their software in a bid to capture market share from a leader like Qualcomm. One way might be to work on ways in which they can make the software configurable or scriptable to provide greater control without having to unlock the core firmware, or simply opt for turning source code over to customers.
Paul Otellini's good timing
26 November 2012 by Chris Edwards
Otellini is in a reasonable position in terms of history. Although he can hardly claim to have placed Intel at the head of the semiconductor market - that happened well before he took over the lead role at the company - it is at least still there. Now Samsung, despite being sued at every turn by its own customer, has benefited from a surge powered by Apple somewhat more than Intel has. Apple's most important products are not the computers that employ Intel's processors, but the handheld devices that use an ARM core instead.
ARM's own dominance of the mobile market, which is now built around slick devices with fluid interfaces, makes the 'Wintel' hegemony upon which Intel built its empire look somewhat stale. Windows 8 was meant to reinvigorate the Microsoft operating system. Although a version of Windows 8 runs on ARM, the main beneficiary of the improvement would surely be the x86 architecture and Intel with it.
Instead, Windows 8 seems to be attracting the curse of Microsoft innovation: looks slick on the surface but is, in reality, a bit annoying. Clippy, Windows Me, Vista. That a company can do this with some regularity and get through is a testament to the power of the installed base and that, although there is a lot you can do on a tablet or mobile phone, there are times when you absolutely, positively have to unleash a proper computer on the problem. Accept no substitutes. Yes, sales of tablets and mobiles will exceed those of PCs, but there is still plenty of room for PC-class hardware.
Linux in its many forms offers some further comfort to Intel. Although ARM would dearly love to carve out a niche in Linux servers in the cloud, Intel still has very much the upper hand here. ARM's penetration into this market relies on it being able to get low-power 64bit silicon into place before Intel finally works out how to do that on its own architecture. It has the technology to do it. The question is whether the company is pushing hard enough to get it done.
The company is leaving the period during which the main aim was to keep the main x86 architecture business ticking along and not do anything to upset that. An incoming CEO has to question where Intel's future advantage lies. The company needs to make a future based on the x86 less threatening and better value for its customer base if it is to ensure continued sales and use its massive capital resources to maintain an advantage in different ways.
Silicon manufacture is an expensive business to keep running - this gives large incumbents a natural advantage. The future architectural challenge, because of power consumption, will lie not in the processor but how it deals with memory and I/O. That may mean conceding that the processor architecture itself is far less important. But, if you were the incoming CEO of a company with a highly profitable business built on architectural lock-in, would you give that up?
Airlifted from the burning platform
16 November 2012 by Chris Edwards
Tommi Laitinen, senior vice president of new products and international product business at Digia, points out that, even if you are standing on a burning platform, you don't have to jump into the water without a lifebelt. Digia stepped in to airlift one of Nokia's former operations out of immediate danger and it will provide technology that could help another of Nokia's castoffs get a foothold. Research In Motion agrees. The Canadian maker of the Blackberry sees the Qt technology bought by Digia as instrumental in revitalising its own software platform.
Matthias Kalle Dalheimer, president and founder of Kdab, which is helping RIM to build Qt support into the Blackberry's underlying QNX operating system, said at the Qt Developers Conference this week: "RIM's going all in on Qt."
Qt might even provide Windows 8 Mobile with a better chance of market success.
Qt's specialty is cross-platform graphical software: a set of tools that makes it possible to write an application for one operating system and then retarget it to another with minimal changes. How minimal depends on what you want to port from where to where. Qt started out as a toolkit to map software between desktop operating systems such as Linux, OS X and Windows.
Tuukka Turunen, director of Qt R&D at Digia, says the desktop world was relatively straightforward to support in this way, although user-interface guidelines would differ: "Many of the desktop systems they have a screen, keyboard and buttons. Mobiles have completely different paradigms. If we made it completely automatic it would probably hinder more than help."
Right now, given the way in which phone makers are suing each other, those differences in user interface are clearly not enough. But we can expect them to diverge over time as companies attempt to differentiate. This will make the idea of cross-platform development harder but, for the software companies, the alternative is a lot more work or to restrict the number of platforms they support. This would make it harder for new players, such as Jolla - which is continuing the work started at Nokia on Meego - to break into the market.
Lars Knoll, Qt chief maintainer, says that using a framework such as Qt it is possible to do 90 per cent of an application in a portable way even if the user-interface styles of the target operating systems differ significantly. "Qt can help because it enforces a reasonably strong separation between the user interface and application code," says Knoll.
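A minimal sketch of what that separation looks like in practice - with a hypothetical Counter class and assuming the usual Qt build steps, such as running moc - keeps the application logic in a plain QObject that knows nothing about the widget driving it:

    // Sketch only: the button (UI) and the Counter (application logic) are wired
    // together with a signal/slot connection and know nothing about each other.
    #include <QApplication>
    #include <QObject>
    #include <QPushButton>

    class Counter : public QObject {   // application logic, no UI code
        Q_OBJECT
    public:
        Counter() : value(0) {}
    public slots:
        void increment() { ++value; qDebug("count = %d", value); }
    private:
        int value;
    };

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);
        QPushButton button("Count");   // user interface
        Counter counter;               // application logic
        QObject::connect(&button, SIGNAL(clicked()), &counter, SLOT(increment()));
        button.show();
        return app.exec();
    }

    #include "main.moc"                // needed when the Q_OBJECT class lives in main.cpp

Porting to another platform then becomes a matter of reworking the widget layer; the Counter, and everything like it, travels unchanged.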
This is why companies see technology such as Qt as being instrumental. If you want the next Angry Birds on your device, it helps to make it easy to port it from the original. And developers get to attack a larger marketplace and a wider range of app stores.
"The whole ecosystem is moving towards platform-agnostic development," claims Laitinen.
Qt has some catching up to do. As Symbian was for a long time the main target at Nokia, Qt does not yet have full support for Android and iOS. However, third-party ports exist and Digia reckons that with the release of version 5 of Qt, which is due by the end of the year, it will have a better architecture to support a variety of mobile targets. Android and iOS are clearly the most important right now. Being similarly less platform-dependent would probably have suited Nokia better.
11 November 2012 by Chris Edwards
Apple has yet to stop sparring with Samsung but it seems only a matter of time before the various players making mobile phones decide to give up trying to kick each other out of the market using the power of courts and International Trade Commission (ITC) bans over comparatively small design elements.
The maker of the iPhone went after HTC over the way the user interface on the screen scrolls - the bounce when you hit the bottom of a list that's meant to replicate real-world physics - and for having the temerity to detect links and email addresses in emails. But are these the kind of features worth erasing from competitors' products when they might hold something against you? And at the high cost of pursuing litigation? Other parts of the industry decided against it.
Earlier in the week, a group of companies decided the last thing they wanted was for litigation to come from another direction: the patent troll. These are companies that are purely in the business of obtaining licence fees from patents they hold and have become feared throughout the technology world. Why? Because there is little you can do to hurt them other than have their patents invalidated. You can't try to get their products banned because they don't have any, other than the patents they have bought up.
In the past, patent trolls were mainly companies that found they didn't have the continuing capital to keep making products. But they had built up decent patent portfolios. Rodime made more money after it gave up making disk drives than it did as a manufacturer - by charging the companies that continued to make them.
ARM is among the companies that have formed Bridge Crossing, a consortium linked to Allied Security Trust, the organization that is buying the rights to MIPS Technologies' patent portfolio. The portfolio includes 580 patents and patent applications covering microprocessor design, system-on-chip design and other related technology fields. The consortium will pay $350m in cash to acquire rights to the portfolio, of which ARM will contribute $167.5m. The consortium said it will make licences to the patent portfolio available to companies outside the consortium.
"ARM is a leading participant in this consortium which presents an opportunity for companies to neutralize any potential infringement risk from these patents in the further development of advanced embedded technology," said Warren East, CEO of ARM in a statement. "Litigation is expensive and time-consuming and, in this case, a collective approach with other major industry players was the best way to remove that risk."
That the portfolio is worth $350m simply as a collection of patents is a testament to how problematic patent trolls have become.