
European researchers look beyond deep learning


An EU network aims to investigate what lies outside the current fashion in AI.

It's not hard to see why deep learning became the flavour of the decade in artificial intelligence (AI). Just a couple of years after the first papers on deep learning appeared in 2006, a team at Stanford University showed how you could get a 70-fold speed boost over a regular dual-core Intel x86 processor by compiling the algorithm for the graphics processing unit (GPU) sitting next to it.

Since then, deep learning has powered its way through a series of benchmarks. In the field of natural language processing (NLP), the team behind the GLUE benchmark suite has produced a more stringent version called SuperGLUE because systems that have adopted a second layer of deep learning were so successful at beating the original.

But deep learning is far from the whole of what AI needs if it is to deliver applications beyond the realms of search and social media, says Holger Hoos, professor of machine learning at Leiden University and co-founder of the AI-focused CLAIRE research network.

“Deep learning is a very exciting topic. It would be a big mistake not to invest in this,” says Hoos. “But if you look at the impact AI technology is having, it’s not the case that it’s all driven by deep learning. Machine learning is much richer than where we’ve seen recent breakthroughs. If you work with only deep learning, you can’t do cutting-edge robotics. When you consider how important robotics is in Europe, work on deep learning is not sufficient.

“It is not clear that these techniques will be responsible for big breakthroughs in more challenging scenarios. And it is not clear we should bet big euros on just this area.”

Robotics does not just mean machinery at work in factories but the guidance systems going into drones and, eventually, self-driving cars. Some researchers in robot navigation, for example, are keen to distinguish the work they do on neural networks from what turned into deep learning. An example is the RatSLAM engine developed at the Queensland University of Technology, which is open source and has been used in many experiments around the world. It has a neural model at its core, but one closer to the kinds of cells found in mammalian brains than to the highly abstract neurons of deep learning, where scale plays a major part in the pipeline’s ability to identify patterns in data. The RatSLAM model lends itself to systems that need to plan trajectories based on what they have seen before, and with far less training than the typical deep-learning system.

The sheer quantity of data that deep-learning models need to learn from is one of the technique's more serious weaknesses. The more complex the model, the more data it tends to need. This is fine if you are a social-media company able to ingest gigabytes of new input every day. But useful training data is far less forthcoming in most other fields. That is particularly true of autonomous vehicles. Most of the data you really want to train a system on occurs very rarely in real life, which is lucky: otherwise driving would be a far riskier endeavour than it already is.

One option is to use synthetic data to generate what you want. This is an approach recommended by another CLAIRE co-founder, Philipp Slusallek of the University of Saarland and scientific director of the German Research Centre for Artificial Intelligence (DFKI). Rather than try to capture real-life incidents, you create them in a virtual space and play back the simulations to the AI model. This, in principle, lets you set up many different collision and near-miss scenarios that you cannot perform in the real world. The automotive industry is used to the idea already in the form of hardware-in-the-loop simulation used for testing powertrain and engine-management systems. However, there is still significant overhead in creating the simulated data as well as the concern that it might not accurately reflect real-world conditions.
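The idea can be sketched in a few lines. The toy "simulator" below is invented entirely for illustration (the physics, parameter ranges and function names are assumptions, not anything CLAIRE or DFKI actually uses): it samples speed-and-gap scenarios at random and labels each one by whether a simple braking-distance model predicts a collision, producing labelled training data for events that would be dangerous to collect on real roads.

```python
import random

def braking_distance(speed_mps, decel=7.0):
    # Distance to stop under constant deceleration: v^2 / (2a)
    return speed_mps ** 2 / (2 * decel)

def synthesise_scenarios(n, seed=0):
    """Generate toy labelled driving scenarios: (speed, gap) -> collision?"""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        speed = rng.uniform(5.0, 40.0)   # vehicle speed in m/s
        gap = rng.uniform(5.0, 150.0)    # metres to a suddenly-seen obstacle
        # Label the scenario: collision if the car cannot stop in time
        label = braking_distance(speed) > gap
        data.append(((speed, gap), label))
    return data

scenarios = synthesise_scenarios(1000)
collisions = sum(1 for _, hit in scenarios if hit)
```

Because the generator is seeded, the same scenario set can be replayed to the model under test, which is one practical advantage simulation has over waiting for rare events in recorded data.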

Medicine represents a different situation, and Hoos sees this area as being an important one for machine learning but one where data is often severely limited. He points to the situation with rare diseases. “They devastate people’s lives but you will never have much data on an individual condition. That’s a major challenge for machine learning in general.”

Synthesising data may work in an environment such as self-driving vehicles, where it is time-consuming but not difficult for humans to check whether the scenarios are realistic. In medicine, it is hard to envisage ways in which synthetic data would be reliable. The need is for machine-learning systems that can make good decisions based on extremely limited data, which points to techniques other than classic deep learning. It may be possible, for example, to use transfer learning, where a system trained on generic conditions uses the scarce data for a rare condition to specialise.
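A minimal sketch of that transfer-learning idea, with everything here invented for illustration: a fixed random projection stands in for a feature extractor pretrained on plentiful generic data, stays frozen, and only a small linear head is fitted on a dozen rare-condition samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a feature extractor pretrained on generic data
# (an assumption for illustration: just a fixed random projection).
W_pretrained = rng.normal(size=(20, 5))

def extract_features(x):
    return np.tanh(x @ W_pretrained)  # frozen: never updated

# Tiny labelled set for the rare condition (e.g. 12 patients)
X_rare = rng.normal(size=(12, 20))
y_rare = (X_rare[:, 0] > 0).astype(float)

# Only the small linear head is fitted on the scarce data
F = extract_features(X_rare)
head, *_ = np.linalg.lstsq(F, y_rare, rcond=None)

preds = (extract_features(X_rare) @ head > 0.5).astype(float)
accuracy = (preds == y_rare).mean()
```

The point of the structure is that the scarce data only has to estimate five head weights rather than the whole network, which is what makes learning from a handful of cases conceivable at all.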

Another problem with deep learning is its fallibility. It is very easy to construct images that look the same to humans but cause neural networks to miscategorise them completely. To many deep-learning systems, seeing a patch of fur is as good as seeing a complete cat because of the way they tend to home in on fine details rather than the structures and shapes that humans perceive. The question is whether solutions to these kinds of problem can be built into deep learning itself, or whether AI that has a better understanding of what it is seeing needs input from many other technologies.
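The mechanism behind such adversarial inputs can be shown on a toy linear "classifier" (the weights, input and step size below are all invented for illustration): a small perturbation chosen against the gradient of the score, in the style of the fast-gradient-sign method, flips the decision even though every component of the input moves only slightly.

```python
import numpy as np

# Toy linear "classifier": score > 0 means "cat"
w = np.array([0.5, -1.2, 0.8, 0.3])
x = np.array([1.0, 0.2, 0.5, 1.5])  # correctly classified: score > 0

# Fast-gradient-sign-style perturbation: a small step of size eps
# per component, directed against the score's gradient (which for a
# linear model is just w)
eps = 0.5
x_adv = x - eps * np.sign(w)

score, score_adv = w @ x, w @ x_adv   # score > 0, score_adv < 0
```

The perturbation is bounded per component, yet the classifier's verdict reverses, which is exactly the gap between what the model measures and what a human perceives.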

“It’s clear that successful applications will combine methods from many areas of AI. At CLAIRE, we expect that it’s important to deal with all of this,” Hoos says.

CLAIRE itself is one of many initiatives around the world intended to work on AI, but with an agenda to look at many more options than deep learning, taking into account everything from self-learning systems to the kinds of numerical optimisation that typify the other end of machine learning.

Rather than create a research institute like nearby Imec or CERN in Switzerland, CLAIRE was conceived as a network of researchers centred on Europe but with associations beyond the EU, and with ambitious targets in terms of how much work its teams will undertake. “We hope that the funding that we round up will be on the order of that of CERN,” says Hoos.

“European governments and the EU have made up their minds to invest in a major way in AI. The risk is that this investment could be spread very thinly and not reach critical mass. If we don’t reach critical mass and co-ordinate, we won’t be able to compete globally.”

The network has funding from a number of local governments and is looking to get more from industry. One attraction for industrial users is that the work could help close the skills gaps in AI, an area where most of the applications need to be driven by experts in the field.

“Part of my research is automated machine learning. That does not just allow machine-learning experts to do better with their algorithms but, more importantly, it enables people with limited expertise to get results,” says Hoos.

The idea behind this area is to find models that suit a particular application and, potentially, even choose the most appropriate type of model, whether it comes from the world of deep learning or one of the many other fields of AI.
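That selection step can be sketched as a search over candidate model families scored on held-out data. This is a toy stand-in for automated machine learning, with all names, models and data invented for illustration; real systems search far richer spaces of architectures and hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(80, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=80)

train, val = slice(0, 60), slice(60, 80)

def fit_constant(X, y):
    # Simplest candidate: always predict the training mean
    m = y.mean()
    return lambda X: np.full(len(X), m)

def fit_linear(X, y):
    # Least-squares linear fit with an intercept term
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda X: np.hstack([X, np.ones((len(X), 1))]) @ coef

candidates = {"constant": fit_constant, "linear": fit_linear}

def auto_select(candidates, X, y, train, val):
    """Fit each candidate on the training split and keep the one
    with the lowest validation error -- the model-selection core of
    automated machine learning, in miniature."""
    scores = {}
    for name, fit in candidates.items():
        model = fit(X[train], y[train])
        scores[name] = np.mean((model(X[val]) - y[val]) ** 2)
    return min(scores, key=scores.get), scores

best, scores = auto_select(candidates, X, y, train, val)
```

The user never has to know which family won; the loop makes that choice from the data, which is what lets non-experts get results.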

“There is a growing imbalance between supply and demand: there is not enough [AI] talent to go around, especially in the public sector. Automated machine learning is incredibly important because it amplifies the talent we have available. And it does that in a responsible way. An example is in statistical tests, something my group [at Leiden] has started working on. This watches over a machine-learning system and signals when it is out of its depth and beyond what we are comfortable with. The sharing and reasonable enforcement of best practice can be greatly aided by these auto-AI systems.
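A crude illustration of such a watchdog, assuming nothing about the Leiden group's actual statistical tests: fit a simple per-feature reference distribution to the training inputs, then flag any new input whose z-score is extreme, on the grounds that the model's predictions there are untrustworthy.

```python
import numpy as np

rng = np.random.default_rng(2)
# Inputs the underlying model was trained on (synthetic for illustration)
train_inputs = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

# Per-feature reference distribution fitted to the training inputs
mu = train_inputs.mean(axis=0)
sigma = train_inputs.std(axis=0)

def out_of_depth(x, threshold=4.0):
    """Signal when an input lies far outside the training distribution
    (a toy z-score test, not the group's actual method)."""
    z = np.abs((x - mu) / sigma)
    return bool(z.max() > threshold)

familiar = out_of_depth(np.zeros(4))         # in-distribution input
unfamiliar = out_of_depth(np.full(4, 10.0))  # far outside training data
```

However crude, a monitor of this shape encodes a piece of best practice (don't trust a model outside its training regime) as a check that runs automatically rather than relying on every user knowing to apply it.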

“However, I don’t think it will solve the talent bottleneck on its own. We need much more investment in AI overall.”

