Book interview: ‘Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse’
According to Elizabeth M Renieris, the way we’ve addressed data privacy over the past 50 years has put fundamental human rights in jeopardy. It needs to change.
“My book isn’t about data, not really,” says Elizabeth M Renieris, author of ‘Beyond Data’. “It’s about how our historical obsession with data in the context of technology governance has been problematic, and why we need a materially different approach: one less focused on data and more focused on people.” Subtitled ‘Reclaiming Human Rights at the Dawn of the Metaverse’, her book’s central argument is encapsulated in the opening sentence of the first chapter: “For more than fifty years, we have been so busy protecting data that we have largely forgotten to protect people.”
A senior research associate at the University of Oxford’s Institute for Ethics in Artificial Intelligence, Renieris argues that, “so long as we continue to focus on data as a pillar of technology governance, our laws will continue to fail us. We aren’t in the 1970s anymore, and we don’t live in a world of neat binaries and clearly delineated databases.” If this leaves you in any doubt about where she stands, she elaborates: “despite claiming to be human-centric and technology neutral, our existing approaches are neither.”
‘Beyond Data’ is a powerful analysis that starts by placing today’s right to data protection in its historical context. It may come as a surprise to learn that there was a body of such rights long before digital tools, technologies and online spaces ever existed. But it’s a real eye-opener to discover that these laws were “first and foremost concerned with people”. Then it all changed in the 1970s, largely in response to the introduction of home computing and computerised databases, when the concept of human rights in the digital domain became “significantly” narrower.
This emphasis on data over people has “effectively undermined a once historically powerful notion of privacy, resulting in widespread human rights harms and abuses”, says Renieris. “Worse yet, this obsession with data, including through decades of flawed, and equally technocratic policies, has induced a kind of ‘data blindness’, causing us to lose sight of a vast array of human rights and freedoms that are implicated by digital technologies, especially as the technological landscape undergoes rapid evolution and change.”
The book finishes with the suggestion that “we need to replace this data-based approach to technology governance – and its near-exclusive emphasis on privacy and free expression – with one based on a wider array of human rights and fundamental freedoms”. The author takes us through examples of how this broader framework would apply to the governance of technologies such as AI, ‘metaversal technologies’ (augmented, extended and virtual reality) and neurotechnologies.
Renieris, who is also founder and CEO of the technology law and policy consultancy Hackylawyer, says that her motivation for writing ‘Beyond Data’ was both personal and professional. On the personal side of the ledger, she describes an experience concerning her former university classmate Mark Zuckerberg and the origins of Facebook, which began “as an exercise in pitting undergraduate women against each other to rate their attractiveness”. She details the “indignation and humiliation I felt at the time, and the impression it left on me as I would eventually go on to train and practise as a data-protection and privacy lawyer”.
Meanwhile, in terms of her professional interest in the subject, “I grew increasingly frustrated with the limitations and shortcomings of data protection and privacy law in addressing the kind of dignity-based violation I perceived at the hands of my classmate, as well as the increasingly complex challenges associated with new technologies over time”.
Renieris often felt that we had been asking the wrong questions of data protection – that we were asking it “to do too much, or else not enough”. At one end of the spectrum it had become a broad-sweeping but ineffective tool for technology governance; at the other, a corporate technical exercise in ensuring the security and confidentiality of data. Meanwhile, her academic investigations had shown that most of the research, scholarship and public discourse on technology governance and human rights focused on just two rights: privacy and free expression. “This is unsurprising when you fixate on data or information,” says Renieris. But these two represent only a small proportion of more than 30 fundamental human rights and freedoms, and are “insufficient to address the wide array of risks posed by technologies like AI, machine learning, or extended-reality technologies such as augmented and virtual reality”.
In choosing to title her book ‘Beyond Data’, Renieris is drawing attention to what she perceives to be the need to “move beyond our obsession with data. You could say that we live in an age of data, although I’d argue that we live in an age of data blindness, where we cannot see beyond data to identify what’s really at stake with respect to new and advanced technologies.” You could argue that data is ‘foundational’ to social media, healthcare and financial services, but it does not necessarily follow that data protection is sufficient for their governance. “As I explore in my book, ‘tech’ neologisms like medtech, fintech, edtech, are powerful rhetorical devices designed to distract us from what these things really are, and to exempt them from what ought to be relevant laws or regulations.”
It’s not that technology in itself is a threat, says Renieris. But she points out, “history shows us how powerful technologies are typically wielded in service of powerful people and interests against less powerful people and interests”. Focusing on data stripped of its social, economic and political context puts the vulnerable at risk of further inequality. Conversely, “a broad human-rights-based approach has the potential to leverage technologies in a way that remedies, or at least accounts for, asymmetrical power”.
For Renieris, today’s focus on data is simply too narrow and “we are at heightened risk of losing sight of our humanity. Until we end the exceptional treatment of so-called ‘technology companies’ we will never have effective governance.” Since completing ‘Beyond Data’, Renieris has observed the widespread release of AI tools such as ChatGPT that use machine learning to generate content with natural language inputs. These have “unleashed a kind of mass experiment or beta testing on the public. At the same time, they reinforce my thesis that an approach to technology governance focused primarily on data is misguided, unsustainable, and dangerous.”
‘Beyond Data’ by Elizabeth M Renieris is published by the MIT Press, £21.99
In ‘Beyond Data’, Elizabeth M Renieris argues that since the 1970s our approach to technology governance has been dominated by laws and policies focused on data. This, she contends, has put individuals and communities at risk of harms, abuses and infringements of their civil and human rights. These include privacy breaches, discrimination and harassment, political persecution... “the list goes on and on”. Instead, she advocates for a new framework for technology governance based on a broader array of human rights and freedoms, well beyond what data protection and privacy can offer. Over the past half century, laws aimed at data protection have emphasised data over people, allowing companies to repackage the right to privacy in their own image. At the dawn of ‘metaversal technologies’, the time is right, says Renieris, to replace this data-obsessed approach to technology governance.
Out of control
Although people are more aware of the potential risks and harms associated with new and emerging technologies, the chasm between their concerns and any realistic modicum of control continues to grow. Meanwhile, the law continues to embrace flawed notions of individuals’ choice and control over their digital experiences.
Europe’s new regulations purport to give people more control over their experience on digital platforms through a paradigm similar to that of traditional privacy and data protection laws: mandating additional transparency and requiring platforms to introduce new user controls and options.
The theory is that with enough transparency, through robust notices and more detailed disclosures, individuals will be better able to adjust or calibrate their experience of algorithmically mediated processes. Yet this approach has not worked in practice for control over the uses or treatment of personal data, and the exponentially greater complexity of AI and machine-learning tools only compounds the problem.
In the United States, where the new bipartisan draft federal privacy bill is being hailed as a great compromise, there are warning signs. Whereas industry has been uncharacteristically quiet on the bill, privacy professionals and advocates have been largely supportive of it, perhaps due to the sentiment that something is better than nothing.
While these efforts are laudable and represent an improvement on the status quo, they nevertheless proffer an approach that is unsustainable (and it is unlikely that we will see the kind of political will and cooperation necessary to make another material overhaul in the foreseeable future).
In fact, as one congresswoman describes it, the legislation is “a band-aid for the American people who are just fed up with the lack of privacy online.” But a band-aid can only ever stop the bleeding. And we desperately need something more enduring.
From ‘Beyond Data’ by Elizabeth M Renieris, reproduced with permission