Artificial intelligence

The sustainable approach that will help avoid a third ‘AI winter’

Image credit: Jakub Jirsak/Dreamstime

Energy-hungry artificial intelligence systems are heading for a dead end. The solution is to design systems that work like the human brain, not an oversimplified version of it.

Most big artificial-intelligence companies are pouring huge amounts of energy and resources into AI in the hope of creating a more efficient, more automated future. However, throwing large volumes of data at machine-learning algorithms and burning through vast amounts of processing power is neither efficient nor futureproof. These algorithms were never developed with efficiency in mind, so addressing that is a vital step towards avoiding another ‘AI winter’.

The energy consumption required for mining and managing Bitcoin has been in the media spotlight for years. The energy usage of crypto transactions has even been compared to that of a country the size of Greece, with a population of over 10 million people. The response of environmental organisations and the public on social media has been huge, and rightly so, especially when geopolitical tensions are creating uncertainties on global energy markets and the climate crisis demands immediate energy-saving measures.

This focus on the environmental impact of cryptocurrencies has distracted from the AI industry's own footprint, which may be even greater. Research from the University of Massachusetts Amherst, reported by MIT Technology Review, showed that training a single AI model can emit as much carbon as five cars do over their entire lifetimes. That calculation covers just one model, while the same article cites a case study in which a single paper-worthy AI project required training approximately 5,000 models; the cumulative climate impact is enormous. What's more, as the scale and application of AI models grow exponentially, so does the computing power needed to deliver them. The global race for computing power has become ever fiercer, spawning more and more server farms that consume huge amounts of energy.
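The 'five cars' comparison can be sanity-checked with the approximate figures the underlying University of Massachusetts Amherst study reported (roughly 626,000 lbs of CO2-equivalent to train one large transformer with neural architecture search, against about 126,000 lbs for an average car over its lifetime, fuel included). The numbers below are taken from that study and should be treated as rough, not authoritative:

```python
# Approximate figures (lbs of CO2-equivalent) from the study reported
# by MIT Technology Review, used only to check the "five cars" claim.
MODEL_TRAINING_CO2 = 626_155   # one transformer trained with neural architecture search
CAR_LIFETIME_CO2 = 126_000     # average car over its lifetime, including fuel

ratio = MODEL_TRAINING_CO2 / CAR_LIFETIME_CO2
print(f"One training run emits about {ratio:.1f} car-lifetimes of CO2")
```

The ratio comes out at just under five, which is where the article's comparison originates.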

Yes, the AI industry has made great progress in natural-language processing performance. OpenAI's Generative Pre-trained Transformer 3 (GPT-3) and Google's Bidirectional Encoder Representations from Transformers (BERT) are two of the brightest examples. However, these models were never built with efficiency in mind, and it won't be long before they hit a dead end.

The past 70 years have seen spikes in AI progress followed by a decline in interest and investment. In an age where sustainability is front and centre, alongside a growth in the understanding of the impact AI models have on climate, the next AI winter might be nearing if industry fails to act now.

There is one solution: a different, more efficient and sustainable approach to building and training AI models. Efforts should be focused on precision and automation, but also on efficiency and accuracy - in other words, genuine ‘intelligence’. This means designing machine-learning systems that work like the human brain and not just like an over-simplified version of it. Why? Because the brain, unlike existing machine-learning models, needs only around 20 watts of power to outperform any AI.

While we still haven’t cracked the code on all the complex mechanisms of the human brain, neuroscience has given us enough understanding to try to create a better AI. Neuroscience-based approaches to natural language understanding (NLU) like semantic folding are already delivering faster and more energy-efficient text classification than BERT, the AI behind Google Search.
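The efficiency argument rests on how semantic folding represents text: words and documents become large, sparse binary 'fingerprints', and similarity is measured by counting shared bits, which needs only cheap integer operations rather than the large matrix multiplications of transformer models. The sketch below is a toy illustration of that idea, not any production algorithm; in particular, the fingerprint positions here are random stand-ins, whereas a real system learns them from a reference corpus so that semantically related words share active bits.

```python
import random

FINGERPRINT_SIZE = 16_384   # size of the sparse binary fingerprint
ACTIVE_BITS = 328           # ~2% of bits set, in line with sparse distributed representations

def word_fingerprint(word, size=FINGERPRINT_SIZE, n_active=ACTIVE_BITS):
    """Toy stand-in: derive a stable sparse set of bit positions from the word.
    Real semantic folding learns these positions from a corpus so that
    related words overlap; here the positions are just seeded randomness."""
    rng = random.Random(word)
    return frozenset(rng.sample(range(size), n_active))

def text_fingerprint(words):
    """Aggregate word fingerprints, keeping only the most frequently hit
    bit positions so the result stays sparse."""
    counts = {}
    for w in words:
        for bit in word_fingerprint(w):
            counts[bit] = counts.get(bit, 0) + 1
    top = sorted(counts, key=counts.get, reverse=True)[:ACTIVE_BITS]
    return frozenset(top)

def overlap(fp_a, fp_b):
    """Similarity is the count of shared active bits: integer set
    intersection, no floating-point matrix maths required."""
    return len(fp_a & fp_b)
```

Because comparing two fingerprints is a single set intersection over a few hundred active bits, classifying or routing a document scales with the number of set bits, not with the hundreds of millions of parameters a transformer must evaluate.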

By combining the latest semantic folding-based algorithms with dedicated high-performance hardware, we can speed up key AI processes like the processing of large volumes of text, while also reducing the computing resources needed to perform intelligent document processing at scale. Deployed in real-life applications, this new model would be able to perform hate-speech detection for nearly three billion Facebook users; filter the Twitter firehose in real time for hundreds of millions of users; and analyse and route tens of thousands of customer enquiries in support centres.

These are just some examples of what efficient AI models can achieve, and we've only scratched the surface of their potential. From optimising language understanding to tackling hate speech, or improving speech recognition to help clinicians build patient records quickly, there are numerous areas for improvement that go beyond the way Alexa and Siri understand our requests.

For the industry to get there, AI needs to become much smarter and orders of magnitude more efficient.

Francisco Webber is CEO and co-founder of

