The chip design conundrum: super-sophisticated or swift and simple?
It may be time to reconsider the ‘bigger is better’ approach to manufacturing ever more complex silicon chips.
Advances in semiconductor technology have given us the sophisticated chips that are the heart of the vast array of high-tech devices we now use to run our lives, from smartphones and flat-panel televisions to games consoles and increasingly intelligent cars. Perhaps more crucially, they have also played a central role in recent advances in healthcare, including sophisticated surgical robots and artificial intelligence (AI) devices that assist medical professionals in the diagnosis of cancer and other conditions.
The silicon chips used in these applications often cost $100 million or more to develop and take years from concept to production. To understand why, we need to consider the technology that underpins them: the production process starts with an expensive, highly refined wafer of crystalline silicon, then modifies the material to create the necessary characteristics for semiconductor device building blocks such as transistors, before adding further material layers to interconnect these devices for a complete integrated circuit.
Over the past 50 years, Moore’s Law has driven progressive reductions in minimum feature size from 10µm to under 10nm. Devices a thousand times smaller than their predecessors can deliver proportionally higher performance and greater circuit complexity, but require increasingly sophisticated and expensive machinery to make them. In addition, wafer sizes and the number of layers used have increased, all of which means more complex manufacturing and therefore higher costs.
This has led to enormous ‘mega-fabs’ capable of achieving the necessary economies of scale to make these highly integrated chips, as well as extremely long production cycle times – typically between three and nine months. This in turn creates pressure to pack as much functionality into the chips as possible (since it’s hard to predict everything that might be needed by the time products eventually reach the market) and requires large teams to work on design, simulation and test, all contributing to the hefty development cost.
The silicon paradigm is ‘big is beautiful’ – big development teams, big chip designs, and big fabs to produce them. But is bigger always better?
There are several problems with this paradigm. First, not every application requires supercomputing power; often simpler functionality is sufficient. Secondly, many use cases cannot support the high development and production costs of silicon chips. And thirdly, as has been highlighted recently, the high capital costs and long lead times of the conventional semiconductor industry make it prone to significant imbalances between supply and demand.
A good example of this can be seen in radio frequency identification (RFID). In the apparel market, RFID has been proven to decrease overstocking and improve both supply chain efficiency and top-line sales. It is already deployed on more than 10 billion items every year, but silicon RFID chips are just too expensive to be used on mass-market everyday items such as food and beverages, home and personal care products, or pharmaceuticals. If we could extend RFID use cases to these verticals, the benefits would be huge – reduced food waste, improved recycling of packaging, and better health outcomes, just for a start.
The potential market size of this and other applications is measured in the trillions – but only if the price is right. No matter how exciting the potential, these use cases do not make sense at the cost of today’s silicon-based electronics.
We need a different way of making electronics – one that is fundamentally lower in both development and unit costs, and that delivers devices that can be seamlessly embedded into a product or its packaging. The designs also need to include sufficient functionality without being over-engineered. Given how the silicon world has evolved, it is hard to see how a leading-edge silicon mega-fab could ever meet these requirements – or how it would make economic sense in Britain (even if supported by significant government subsidy).
With a novel, modular manufacturing system, production can be distributed close to the point of use, reducing the transportation involved in delivering devices – a significant contributor to their carbon footprint. Hitting these criteria will open up the market for trillions of connected smart objects that can positively impact some of the biggest global problems we face today, including climate action, the circular economy, sustainable agriculture and ubiquitous healthcare.
Scott White is CEO of PragmatIC Semiconductor.