
Building digital trust will be essential to adoption of AI tools

Image: EU GDPR data security concept (credit: Christof Prenninger/Dreamstime)

A culture of data sharing between governments, tech giants, start-ups and consumers is critical to creating artificial-intelligence applications. Regulatory intervention needs to be carefully balanced so that it doesn’t stifle innovation.

The Covid-19 pandemic has caused a sea change in public attitudes to data sharing. Before the outbreak, governments were not the entities individuals most trusted with their data. Yet, facing an unprecedented health crisis and wanting to play their part in the national response, citizens across the world have willingly given their information to government test, trace and isolate programmes.

Success in fighting the pandemic hasn’t been evenly distributed, for many reasons. However, some of the most successful countries in Europe and Asia Pacific have something in common – a commitment to data-protection standards. The General Data Protection Regulation (GDPR) may have encouraged a culture of confidence in data sharing in Europe and among its trading partners.

This offers a useful perspective to organisations working in the domain of artificial intelligence (AI). The willingness to share data with or among businesses, as well as governments, depends on trust and the expectation of reward. To part with their data, individuals, businesses and governments need to expect something valuable in return and to be reassured that the data will be protected.

This trend towards increased data sharing can be encouraged by a coherent framework for trusted data use and responsible AI. Such a framework can address data quality, transparency and accountability as well as people’s expectation of control over their data.

Europe is moving towards a data-agile economy. The region is seeking to address many of the weaknesses that have limited the competitiveness of European companies, most notably the lack of access to large quantities of high-quality data. Such data is an essential asset in the race to develop powerful AI solutions that can greatly enhance business insight and efficiency.

As part of the recently adopted European Data Strategy, the European Union will propose a Data Act in 2021 that aims to foster business-to-government data sharing in the public interest as well as to support business-to-business data sharing. The aspiration is to create a genuine single market for data and common data pools that organisations can tap for growth and innovation.

Core to the strategy is continued respect for citizens’ rights and freedoms. Consistent with Europe’s stance on the protection of fundamental rights, including privacy, the new data ecosystem is unlikely to mandate data sharing as a general rule. The new requirements will need to take into account the existing body of consumer rights and are likely to enhance organisations’ responsibility for keeping customer data secure.

In parallel, the EU will propose legislation in early 2021 that aims to drive a horizontal, risk-based and precautionary approach to the development and use of AI. While the specifics are still taking shape, the legislation will advance transparency, accountability and consumer protection. This is likely to be achieved by requiring organisations to adhere to robust AI governance and data-quality requirements.

If the digital trust felt by citizens has contributed to the success of many test and trace programmes, the upcoming legislation will likely entrench this trend in the realm of AI. Notably, European legislation is likely to have implications across the world. Much as the GDPR prompted other nations to enact similar data-protection laws, new data and AI legislation may create a global ripple. As the UK develops its own strategies for data sharing and AI development, lawmakers will surely be keeping a close eye on Europe.

An active and inclusive culture of data sharing between governments, tech giants, start-ups and consumers is critical to creating tomorrow’s AI applications, and digital trust is its necessary foundation. In their management of data and development of AI, organisations should strive to build confidence among consumers beyond merely complying with applicable standards.

Policymakers have the power and responsibility to facilitate this process. But the task is not easy. Regulatory intervention needs to be balanced so that it does not stifle AI innovation and adoption. At the same time, it must give clear, consistent and flexible guidance on how to develop and use trustworthy, safe and accountable AI. 

Balanced regulation may also provide market incentives. Indeed, the GDPR experience shows that organisations today compete on privacy protection. It is not only regulation but also customer demand that is driving the new privacy culture in the marketplace. The changing attitude of citizens and consumers towards privacy and other rights is strengthened by a collective awakening to the fundamental values and freedoms humanity should preserve amid rapid technological change. The pandemic and recent major societal movements related to human dignity, diversity and inclusion have accelerated the trend towards ethical practices in technology use and development as well.

Legislators need to weave these elements into a legal framework that helps build digital trust in AI. Clear and flexible rules on how to operationalise fairness, accountability, transparency and explainability (FATE) principles through model and data governance will help create a culture of responsible AI. That culture will give citizens the confidence to embrace AI, enabling businesses, governments and society as a whole to benefit from its transformative power for good.

Kalliopi Spyridaki is chief privacy strategist, Europe and Asia Pacific, with SAS.
