Artificial intelligence for business

Where should digital engineers focus their generative AI efforts?


Businesses may feel they can’t afford to be left behind in the race to adopt artificial intelligence tools, but how do they identify the areas worth investing in at a time when resources are already stretched?

Artificial intelligence (AI) has had a game-changing impact on industries far and wide, but we’re still only scratching the surface of what’s truly possible. Nowhere is that statement more relevant than in the field of digital engineering, where leaders are now facing tough decisions on where to focus their resources to best navigate the evolving AI landscape.

Generative AI, natural language processing and tools like ChatGPT have become boardroom buzzwords, but investment in one area might lead to underinvestment in another. At a time when resources are stretched and opportunities are plentiful, where should digital engineering companies be focusing their resources?

AI is ubiquitous, but by most counts it’s still a nascent technology. We may have reached the point where the automation of processes has become normalised and machine-learning models are commonplace, but there are acres of uncharted ground left to explore. Data visualisation, for example, uses AI algorithms to turn raw data into images that humans can understand and respond to more effectively, and it’s chronically underused in the data engineering space.

Emerging technologies that were once on the distant horizon, such as the generative pre-trained transformer models behind ChatGPT, are now firmly within grasp, but there is no consensus or blueprint as to how they can best be adapted for specific goals. It’s down to the current crop of data scientists and engineers to assess these technologies and carve a path forward with them, seizing the many advantages on offer without exposing themselves to too much risk.

Although AI has been applied in data engineering for some time, recent developments in large language models such as ChatGPT have taken it to a new level of sophistication. This emerging field of generative AI has the potential to carry out a range of tasks that were once dependent on manual input, from code generation and language processing to bug detection, testing and documentation.

Leveraging these tools is a sure-fire way to boost digital transformation and meet the growing demand for products and services head-on, but it must be done in a way that preserves trust and leaves organisations prepared for the knock-on effects. The application of generative AI solutions has obvious benefits, but will also have consequences, such as how it impacts other processes, how it shapes policies and personnel, and how data security is handled. These are the issues many leaders in the digital engineering space are having to grapple with today.

As with every new technological innovation, regulators lag far behind the curve. This presents a unique opportunity for data engineers – the race for credibility has begun. By ensuring that their AI models are transparent and trustworthy, they can take the lead in shaping the technology and our general adoption of it. This might involve creating a framework of policies that inspire trust, such as Gartner’s AI TRiSM, an approach which combines trust, risk and security management when embarking on AI adoption.

One of the central themes around trust in AI centres on ethics and bias. How can we be sure that the algorithms and models being built aren’t echoing the biases of the people who created them, or that the datasets used aren’t inherently skewed? If the purpose of generative AI is to formulate best courses of action and facilitate decisions – or even make decisions on its own – there needs to be a framework in place that ensures these processes are ethical and unbiased, or at least moderated by human actors. This is where data engineers should be focusing their efforts if they want to get ahead of the AI curve and leverage the technology to its fullest potential.

The path to unearthing value in AI lies not just in its wholesale adoption, but in the governance of that adoption and the methods surrounding it. In terms of where to invest, engineers should be focused on implementing tried and tested use cases, but also experimenting with new ideas in a ‘safe’ development space where mistakes can be made and learned from.

At a time when talent is in short supply, rallying a small team of individuals to try and test new applications is the best path toward harnessing AI. In doing so, businesses will be able to hedge their bets, invest wisely and target their resources where the greatest value can be gained, becoming torchbearers for generative AI and the complex processes involved in its deployment.

Vamsi Kora is chief data strategy officer at Apexon.
