
News from the global AI hothouse


A review of progress in artificial intelligence research reveals that machine learning is improving steadily, but is about to get a whole lot better.

The need to train machines more quickly, more efficiently and less expensively is a pressing imperative. AI-powered technology, dependent on machine learning, has enormous potential to propel business and benefit society. Progressive improvements in training will bring rich rewards. If only we could imbue machines with a childlike curiosity that encourages them to learn more naturally, more intuitively and more effectively. Perhaps we can.

Like many businesses and academic institutions, our team at Cambridge Consultants invests in extracurricular research to advance machine learning, using algorithms and neural-network models to progressively improve the performance of computer systems. Many see meta learning – essentially learning to learn – as the ultimate objective. With that in mind, I’d like to share my progress report on the very latest academic and practical developments from the global AI hothouse.

First let’s recap on the headway made so far. The fundamental challenge in machine learning is that a machine starts with a tabula rasa, or clean slate. It comes into the world essentially as if it were born yesterday. We create sophisticated systems, usually well over-specified, that are capable of more than we ask of them. But to achieve more, they need to be exposed to many tens of thousands, if not hundreds of thousands, of training examples for every task.

Compare that to a child, who can use what they’ve already learned. They can immediately ‘get’ something by drawing on building blocks of learning. In simple terms, they learn naturally. But in technology, each system is trained from scratch, on a single task or single data set. One way to get machines learning more naturally is to help them learn from limited data using approaches like generative adversarial networks (GANs). The idea is to generate data from your own training sets rather than going out into the real world.

The ‘adversarial’ bit comes from the process of pitting one neural network against another to generate new synthetic data. Subsets include synthetic data rendering – using gaming engines or computer graphics to render useful data. Another approach, domain adaptation, involves transferring knowledge across domains (using data in summer that you have collected in winter, for example). A third, few-shot learning, is all about making predictions from a limited number of samples.
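To make the adversarial idea concrete, here is a minimal sketch in plain NumPy. It is a toy, not production GAN code – the one-dimensional data, the single-parameter models and the learning rates are all my own assumptions. A tiny generator tries to mimic samples from a Gaussian centred on 3, while a logistic discriminator tries to tell real from fake; each update of one network is an adversarial move against the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # Toy "real" data: samples from a Gaussian with mean 3.
    return rng.normal(3.0, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b maps noise z to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" x looks.
w, c = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    # --- discriminator update: raise D on real data, lower D on fakes ---
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator update: move fakes to where the discriminator is fooled ---
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

As training proceeds, the mean of the generated samples drifts from 0 towards the real mean of 3 – synthetic data standing in for data you would otherwise have to go out and collect.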

Other approaches take a different limited-data route. Multi-task learning is a fascinating one, in which commonalities and differences across tasks are exploited to solve multiple tasks simultaneously. Although machine learning is mostly supervised – with inputs paired to target labels – progress is also being made in unsupervised, semi-supervised and self-supervised learning. This is all about learning without having all the data labelled. Clustering is a good example: an algorithm might cluster things into groups with similarities that may or may not be labelled, but a subsequent examination of the clusters will reveal the system’s thinking.
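As an illustrative sketch of that clustering idea – the two-blob data and the bare-bones k-means loop here are my own toy assumptions, not a reference implementation – an algorithm can group unlabelled points so we can then inspect what it ‘thinks’ belongs together:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabelled data: two blobs, one around (0, 0) and one around (5, 5).
pts = np.vstack([
    rng.normal(0.0, 0.5, (20, 2)),
    rng.normal(5.0, 0.5, (20, 2)),
])

# Plain k-means with k=2, deterministically initialised at the leftmost
# and rightmost points to keep the sketch reproducible.
centroids = pts[[np.argmin(pts[:, 0]), np.argmax(pts[:, 0])]]
for _ in range(10):
    # Assign each point to its nearest centroid...
    dists = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    # ...then move each centroid to the mean of its assigned points.
    centroids = np.array([pts[labels == k].mean(axis=0) for k in range(2)])
```

Examining which points share a label reveals the grouping; attaching meaning to those groups is where human inspection – or a handful of labels, in the semi-supervised setting – comes in.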

Everything I’ve described so far is bringing incremental advances. But is it taking us close to a great leaping-off point into meta learning? Well, I’m yet to see any evidence of it emerging from the lab into public consciousness and practical use. But that’s not to say there aren’t some thrilling new developments bubbling to the surface. Let me whet your appetite with two: the transformer architecture and closed-loop experimentation.

For me, the emergence of the transformer provides clear parallels to the ‘learning to learn’ meta concept. Most neural-network algorithms are specifically designed to perform one job. Here we are talking about an architecture that can be tuned and turned to different tasks – similar to the notion of machines exploiting building blocks of learning. The transformer initially used self-attention mechanisms as the building block for machine translation in natural language processing. But now it is being applied to other tasks, like image recognition and 3D point cloud understanding. Trust me, this is exciting, ground-breaking stuff.
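At the heart of the transformer sits the self-attention building block I mentioned: every token’s output is a weighted mix of every token’s value, with the weights computed from content. A minimal NumPy sketch of scaled dot-product self-attention (the tiny dimensions and random weights are my own assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (n, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (n, n) token-to-token affinities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v, weights               # mix values by attention weight

n, d = 4, 8   # 4 tokens, 8-dimensional embeddings
x = rng.normal(size=(n, d))
w_q = rng.normal(size=(d, d)) * 0.1
w_k = rng.normal(size=(d, d)) * 0.1
w_v = rng.normal(size=(d, d)) * 0.1
out, attn = self_attention(x, w_q, w_k, w_v)
```

Because nothing in this block is specific to words, the same mechanism can be pointed at image patches or 3D points – which is exactly why the architecture travels so well between tasks.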

The second up-and-comer to watch out for is the process of training AI models in an experimentation loop. Without explaining all the complexities, essentially it turns the traditional data-first approach on its head. Rather than asking, “We’ve got all this data, what will it solve?” the idea is to start with the problem and then create the data sets you need. This is happening right now in drug discovery. You get the AI to say what it would like to know, then you run an experiment in the lab to find missing pieces of information that you feed back into the neural network so it can fill in its knowledge gaps. Steps are afoot to close the loop and automate the whole process. Again, amazing progress as machines continue to learn how to learn.
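One common way to close such a loop is active learning: the model scores how uncertain it is about each untested candidate, and the most informative one is ‘sent to the lab’. The sketch below is my own toy illustration – a one-dimensional logistic model with a synthetic oracle standing in for the wet-lab experiment – not how any particular drug-discovery pipeline actually works:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pool of candidate "experiments". The oracle (the lab) knows the truth:
# class 1 iff x > 0. Labels are expensive, so we query one at a time.
pool = np.linspace(-3, 3, 61)
oracle = lambda x: int(x > 0)

# Start with just two labelled points at the extremes.
labelled_x, labelled_y = [-3.0, 3.0], [oracle(-3.0), oracle(3.0)]

def fit_logistic(xs, ys, steps=500, lr=0.5):
    """Tiny 1-D logistic regression fitted by gradient descent."""
    w, b = 0.0, 0.0
    xs, ys = np.asarray(xs), np.asarray(ys, dtype=float)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * xs + b)))
        w -= lr * np.mean((p - ys) * xs)
        b -= lr * np.mean(p - ys)
    return w, b

queried = set()
for round_ in range(5):
    w, b = fit_logistic(labelled_x, labelled_y)
    # Ask the model what it would most like to know: the candidate whose
    # predicted probability is closest to 0.5 (i.e. most uncertain).
    p = 1.0 / (1.0 + np.exp(-(w * pool + b)))
    order = np.argsort(np.abs(p - 0.5))
    pick = next(i for i in order if i not in queried)
    queried.add(pick)
    # "Run the experiment" and feed the result back into the training set.
    labelled_x.append(float(pool[pick]))
    labelled_y.append(oracle(pool[pick]))
```

The queries cluster around the decision boundary – the model spends its experimental budget exactly where its knowledge gap is – and automating this ask-experiment-retrain cycle is what closing the loop means.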

Tim Ensor is director of artificial intelligence at Cambridge Consultants, part of Capgemini Invent.

