Sponsored

Trends, innovations, and human impacts of artificial intelligence

BAE Systems and MathWorks are working together to understand and solve the engineering challenges of the future, whether that is in the development of a Future Combat Air System, the Navy’s next submarine, or in the application of Digital Twin technologies and DevSecOps methodologies. E&T Technology Editor Tim Fryer speaks to an expert panel of Jos Martin (Director of Engineering for Parallel Computing and Cloud Platform Integration products at MathWorks) and Professor Nick Colosimo (AI & Autonomy Technical Specialist, and Lead Engineer Future Combat Air System (FCAS) for Technology at BAE Systems) to discuss aspects of AI, from trust and workforce requirements to information exchange, current trends and what the future might hold.

Left: Prof. Nick Colosimo, BAE Systems - AI & Autonomy Technical Specialist, and Lead Engineer Future Combat Air System (FCAS) for Technology. Right: Jos Martin, MathWorks - Director of Engineering for Parallel Computing and Cloud Platform Integration

How MathWorks and BAE Systems work together

Jos, Nick, thank you for joining me. Nick, perhaps you could start by giving us a brief introduction to how MathWorks and BAE Systems are working together?

Nick Colosimo

BAE Systems and MathWorks are working together to understand and solve the engineering challenges of the future, whether that is in the development of a Future Combat Air System, the Navy’s next submarine, or in the application of Digital Twin technologies and DevSecOps methodologies.

MathWorks products are deployed across almost the whole breadth of engineering activities within BAE Systems, from major development tasks such as early concept design assessment and autocode generation of safety-critical flight control software, to the detailed design of systems using physics and behaviour models, and enhancing operational support and logistics activities with data analytics and machine learning capabilities.

With approximately 2,000 users of MATLAB and Simulink working in the air, land, sea and subsea domains, BAE Systems has a strong collaboration with MathWorks. Access to consultants, developers, training and on-demand licensing equips our engineers with the tools, skills and knowledge to meet our customers’ needs. Our user base is also supported by an award-winning community of passionate individuals within the business, who promote sharing and learning through webinars and workshops, and through community challenges around coding or homebrew electronic systems.

Building trust in AI

There are a lot of perceptual issues with AI around how much we can trust it and how much we can control it. Is this likely to be a key issue, and how can we address it?

Nick Colosimo

When it comes to AI, we are naturally conservative in the area of embedded systems and safety-critical tasks. From a user perspective, you can define trust in three terms:

  • reliability
  • predictability
  • explainability

When we get onto a plane, we don’t usually understand how the plane itself works; however, we still trust it because of experience, familiarity and statistics. We see the reliability and predictability of getting on an aeroplane and therefore trust an otherwise complex machine.

The challenge is when you present a user with a new system, such as an AI system, for which the data on reliability and predictability is either not available, or not in a form that can be readily accessed or understood. It’s important to recognise that whilst industry will have gone to extremes to ensure that the system does exactly what it claims with respect to safety, reliability, and predictability, it still needs to be proven to gain trust. Depending on the context, you may not need all three terms at once, but in their absence, people are far more risk-averse when it comes to adopting new technologies. 

Consequently, there are several hurdles to overcome, not least the sheer volume and repetitive nature of the data required. As a human brain develops, it is trained on an enormous amount of information; large amounts of image data pass through a human brain to teach it what a giraffe looks like, even if it does not feel like training at the time. Even so, the brain needs far less data than a deep learning model, such as a convolutional neural network, needs to recognise objects in images. Neuroscientists relate this impressive feat to the feature-binding problem, something we see in primates, for which there are not yet artificial intelligence solutions, although some glimmers of hope are emerging.

At present, when an autonomous car is driving around on the road safely, how can you be certain it can tell the difference between a pedestrian, an advertising sign with an image of a human on it, or a plastic bag blowing around in the road? This is as much about sensing modalities as it is about the smart algorithms behind it all. The data acquired from devices like RADAR and LiDAR must be matched so that, from the perspective of the system, there is a distinguishable difference; we can call this ‘discriminating sensing’. If we can formulate solutions that leverage an appropriate array of sensors to make up for the gap between current AI and the human brain in image classification, we are much more likely to see solutions with true human equivalence: equivalence, that is, in classification accuracy, robustness and the amount of training data required to be a good human driver. But a good human driver may not be enough, and the yardstick may often need to be a professional driver if the economic benefits of vehicle autonomy are to be realised.

Jos Martin

I think that the explainability of general AI models will be key to user adoption and building trust in them. The problem right now is that very few people can explain how these AI systems work in a way that can be easily understood.

Certainly, we’re seeing product designers, scientists and engineers wanting more tooling in the explainability realm, and the number of people who can explain how AI systems work needs to grow for AI to be trusted. Explainability frameworks are needed so that the vast numbers of AI models we see can have the reliability, predictability, robustness and explainability that Nick describes. Once those frameworks exist, users will have a reasonable way of estimating how good their AI models are, a practical way of knowing how effective the AI is, and under what circumstances it will work. At that point, users will be comfortable using AI models and we will start to see more general trust in AI models and systems, although further development is still needed in these areas.
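
To give a flavour of what such tooling looks like today, the snippet below is a minimal sketch using the gradCAM function from MATLAB’s Deep Learning Toolbox to visualise which parts of an image drove a classifier’s decision. The pretrained SqueezeNet network and the sample image are illustrative assumptions, not anything discussed in the interview.

```matlab
% Minimal explainability sketch: highlight the image regions that most
% influenced a classifier's prediction using Grad-CAM.
% Assumes Deep Learning Toolbox (and the SqueezeNet support package).

net = squeezenet;                                   % example pretrained classifier
img = imresize(imread("peppers.png"), net.Layers(1).InputSize(1:2));

[label, scores] = classify(net, img);               % the prediction to explain

% Grad-CAM produces an importance map for the predicted class
map = gradCAM(net, img, label);

figure
imshow(img)
hold on
imagesc(map, "AlphaData", 0.5)                      % overlay the importance map
colormap jet
title(sprintf("Predicted: %s (score %.2f)", string(label), max(scores)))
```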

A vision for the future – Tempest, Future Combat Air System. Credit BAE Systems.

Current trends

There’s a lot of positivity about the possibilities and current trends of AI at the moment. Can you elaborate on where we are, and where we are going?

Nick Colosimo

I think everyone would agree that the machines are getting better and smarter. When we talk about artificial intelligence, we often encompass a broad spectrum of techniques, but most of the time people are really talking about machine learning and its subset, deep learning. Many of the AI advancements that we’ve seen recently have been in the deep learning field. That’s been possible because we’ve had much more affordable processing power, more data, and useful tools like MATLAB and the Anaconda suite. Advances made through deep learning have proven surprisingly successful in machine vision and in playing combinatorial games such as Go, an exceptionally complex game.

Within BAE Systems, we are applying these capabilities right across our engineering activities. My colleagues and I recently conducted some collaborative work with Cranfield University to look at when a jet engine might fail and how to reliably detect and classify small UAVs, and we used MATLAB for much of this work. It is amazing what you can pick up in a couple of hours through the online Onramp courses; even with just a basic understanding of artificial intelligence, you can become a reasonably competent user in a weekend.
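
As a flavour of what that kind of work can look like in practice, here is a minimal sketch of training a classifier on logged engine sensor features with MATLAB’s Statistics and Machine Learning Toolbox. The file names, the response variable and the choice of model are illustrative assumptions, not details of the Cranfield study.

```matlab
% Minimal machine-learning sketch: flag engine records that are likely
% to precede a failure, from a table of logged sensor features.
% The CSV files and variable names below are hypothetical.

data = readtable("engineSensorLog.csv");        % one row per engine cycle

% Train a bagged-tree ensemble to predict the (hypothetical) label column
mdl = fitcensemble(data, "FailsWithin50Cycles", "Method", "Bag");

% Estimate how well the model generalises with 5-fold cross-validation
cvmdl = crossval(mdl, "KFold", 5);
fprintf("Cross-validated classification error: %.3f\n", kfoldLoss(cvmdl));

% Score a new batch of sensor readings with the trained model
newData = readtable("latestSensorReadings.csv");
predictedLabels = predict(mdl, newData);
```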

Jos Martin

One of the core reasons why deep learning has taken off in the last five years or so is that there have been complete toolchains that let the designer work in their normal mode of operation all the way through the AI workflow. They know what their data looks like, they ingest it, they label the ground truth in interesting ways, and they then train AI systems on it. Those systems can be trained on high-performance computing or GPU resources, and the resulting networks can then be put into a more standard control system for use in cars, aeroplanes, finance, and so on.
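
A minimal MATLAB sketch of that kind of end-to-end toolchain, assuming an image-classification task with labelled folders of training images, is shown below; the folder name, input size and layer choices are placeholders rather than a real design.

```matlab
% Minimal end-to-end sketch: ingest labelled data, train a small image
% classifier (on GPU if available), and keep the network for deployment.
% Assumes Deep Learning Toolbox; the folder and sizes are placeholders.

% Ingest data, taking ground-truth labels from the folder names
imds = imageDatastore("trainingImages", ...
    "IncludeSubfolders", true, "LabelSource", "foldernames");
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, "randomized");

inputSize = [224 224 3];
augTrain = augmentedImageDatastore(inputSize(1:2), imdsTrain);
augVal   = augmentedImageDatastore(inputSize(1:2), imdsVal);

% A small convolutional network, for illustration only
layers = [
    imageInputLayer(inputSize)
    convolution2dLayer(3, 16, "Padding", "same")
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, "Stride", 2)
    fullyConnectedLayer(numel(categories(imds.Labels)))
    softmaxLayer
    classificationLayer];

% Training runs on a GPU automatically when one is available
options = trainingOptions("adam", ...
    "MaxEpochs", 10, ...
    "ValidationData", augVal, ...
    "ExecutionEnvironment", "auto", ...
    "Plots", "training-progress");

net = trainNetwork(augTrain, layers, options);

% The trained network can then be embedded in a wider control system,
% for example via code generation for the target hardware.
```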

What’s next?

What should we be expecting in the future? Are there any advancements in AI which particularly excite you, or that you expect to see in the not-too-distant future?

Jos Martin

In looking at what MathWorks has done with AI, and what our customers have done with it, almost all of the activity is something we couldn’t do before. AI is increasing productivity by enabling us to do things we didn’t know how to do, rather than by making tasks we already know how to do more efficient. What I expect to happen is that the way people approach their work will change.

The only way that general systems can be stable is if they have a lot of information content intrinsic to their behaviour. However, AI systems are generally not trained on enough data to give them as much information content as they need, which is why they sometimes fail in unpredictable ways. Usually, this lack of training data comes down to the cost and availability of data rather than a specific choice by the designer. In the future, I expect that instead of feeding raw data into training, many AI systems will be trained using other AI systems that hold significantly larger amounts of information content. This would allow more information to flow into the training of a given system, and would provide traceability and provenance for the original data used to generate many smaller AI models.
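
One way this idea is already explored is teacher-student training, sometimes called knowledge distillation, where a small network learns from the outputs of a much larger one. The sketch below shows the core loss calculation in MATLAB using dlnetwork objects; the networks, the temperature value and the 0.7/0.3 weighting are illustrative assumptions, not a description of MathWorks’ plans.

```matlab
% Minimal teacher-student sketch: the student network is trained to match
% the soft output distribution of a larger teacher network as well as the
% true (one-hot) labels. Intended for use inside a custom training loop.
% All names, the temperature and the loss weighting are illustrative.

function [loss, gradients] = studentLossFcn(studentNet, teacherNet, X, Ttrue, temperature)
    % Soft targets from the large, information-rich teacher model
    Zteacher = predict(teacherNet, X);
    Psoft    = softmax(Zteacher ./ temperature);

    % Student forward pass on the same mini-batch
    Zstudent = forward(studentNet, X);

    % Match the teacher's distribution and the original hard labels
    lossSoft = crossentropy(softmax(Zstudent ./ temperature), Psoft);
    lossHard = crossentropy(softmax(Zstudent), Ttrue);
    loss     = 0.7*lossSoft + 0.3*lossHard;

    gradients = dlgradient(loss, studentNet.Learnables);
end

% Inside the training loop, for each mini-batch (X, T):
%   [loss, grads] = dlfeval(@studentLossFcn, studentNet, teacherNet, X, T, 4);
%   [studentNet, avgGrad, avgSqGrad] = adamupdate(studentNet, grads, ...
%                                                 avgGrad, avgSqGrad, iteration);
```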

Nick Colosimo

I think AI is really a tool. There are not only products that contain AI but also process improvements that you can make using AI; in other words, creating market-differentiating product features and doing the engineering better (time, cost, quality).

These systems, while becoming increasingly opaque and black-box in nature, require increasingly large amounts of data for training to achieve this performance, or, in the case of a deep reinforcement learning algorithm, many, many test runs in order to perfect a solution. However, they’re not particularly robust yet. This has been exposed by adversarial techniques, including generative adversarial networks, which show that deep learning image classifiers are not really picking out the features (and relating them to one another in a holistic, geometric manner) that we would associate with why one thing is an ostrich rather than a school bus, for example.

Moving forward, we can expect much more robust, explainable artificial intelligence that requires less training data, utilising what we might refer to as one-shot learning. This will require novel ensembles of AI algorithms, where we take larger logic and automated-reasoning knowledge-based systems and combine them with neural networks or decision trees, to get the best of each of these different aspects of artificial intelligence.

Now, to catalyse all of this, there is going to be a much greater need for computational power, because the amount of computing required for training seems to be going up exponentially; by some accounts it is rising faster than Moore’s Law (think of this as the general trend of increasing computational power). So we’re going to need not only new software architectures but also new hardware devices, such as neuromorphic and quantum computing. We’re going to have to invest if we want our artificial intelligence to be at the cutting edge, and the only way we will get there is through both investment and greater leverage of these new types of hardware, not to mention continuing to leverage partners and their state-of-the-art development tools. I think these are the necessary catalysts for the next breakthroughs in artificial intelligence.

For more information on MathWorks and BAE Systems you can visit:

Learn about Artificial Intelligence

Read more about how BAE Systems is investing in and developing technologies

