Why digital identity will make or break the Metaverse
In virtual worlds where appearance is customisable, the potential for fraud is greater than ever.
Many brands, including Coca-Cola, Nike and Disney, are creating experiences in the Metaverse, yet the virtual space remains a rather nebulous concept. It’s far from fully understood, even by the most eager stakeholders, and there’s a long way to go before the potential applications of virtual worlds come into focus.
Caitlyn Ryan, EMEA VP of Meta’s Creative Shop, has defined the Metaverse as “a set of virtual spaces where you can create and explore with people who aren’t in the same physical space as you.” In summary, it’s an immersive platform where people can collaborate, socialise and become part of a shared experience. With reports suggesting that the total value of the Metaverse is forecast to hit $36 bn by 2025, it’s no surprise that business leaders are keen to understand its true capacity.
Amid the hype and potential, there’s a lot of work to be done to establish the fundamentals that will make the Metaverse a safe experience for all. In particular, access must be underpinned by verified identities; without them, anonymous bots can wreak havoc. For businesses, business people and celebrities, impersonators can damage brand and reputation, not to mention the new opportunities impersonation creates for scammers and fraudsters. Ensuring trusted identity will therefore be fundamental to the safety and success of the Metaverse.
In the Metaverse, even more than in the real world, appearances can be deceiving. People will be able to create digital avatars to represent themselves, which makes tying each avatar to a legitimate human identity all the more important. In virtual worlds where appearance is customisable, the potential for fraud is greater than ever.
This may not seem to carry an immediate threat, but for large organisations it opens the floodgates to nefarious actors. Scanning faces or photos to create avatars of a real-life likeness, without first verifying that the likeness and the person creating it match, could present opportunities for fraud and mistrust of the system.
For instance, as trading digital assets such as cryptocurrencies and NFTs becomes more popular and potentially lucrative, the Metaverse could become a clear target for fraudsters and cybercriminals.
Regardless of what an individual's avatar may look like, it’s crucial that others can trust they are who they say they are. In most cases, this means highlighting in the Metaverse when a user identity has been matched to, and verified against, a real human. Providing the opportunity for such verification will help to enhance trust and confidence in the system.
However, this doesn’t mean automatically exposing users’ real identities without giving them the option of anonymity. After all, keeping identities hidden online and in digital forums can be vital to protect minorities and other vulnerable users, and to fight repression and corruption. Metaverse users should be able to choose whether to visibly verify their real identities - if they don’t, fellow inhabitants can opt not to engage with them.
Identity verification will be crucial to enabling many of the use cases for virtual worlds. From a corporate perspective, the Metaverse has the power to revolutionise hybrid work by combining the benefits of in-person and remote communication, eradicating the issue of finite office resources and giving each avatar access to tools that enhance what they are saying or presenting. But organisations must carefully consider the implications for access management before taking meetings, collaboration and data sharing into this virtual arena. If avatars are fully customisable, how can you be sure that the right person has just entered your meeting? With sensitive information potentially at stake, employers need to ensure the legitimacy of those behind the avatars.
Age verification will be important too. The Metaverse will provide a new dimension to chat rooms and other age-related platforms. Avatars’ appearances may change regularly and there will undoubtedly be illegal attempts to access restricted content - such as new gambling experiences or virtual cinemas.
The infrastructure of the Metaverse will likely also be used to support the transaction of goods and services using NFTs and other forms of digital currency, making it crucial to understand who you are exchanging payment with.
These use cases highlight why, in digital environments like the Metaverse, online identity verification needs to be swift and robust, with the appropriate level of privacy and security. Whether this means helping users see which avatars have a ‘verified’ real identity connected to them, or something else, it will be important to mitigate the early reports of nefarious activity on the platform. A person’s real identity doesn’t necessarily need to be on display, but it’s important to at least show that an avatar is backed up by a real person to protect users and allow them to make informed choices about who to engage with, while ensuring accountability and driving trust.
Ultimately, the Metaverse suffers from many of the same issues as the broader online world. In the rush to capitalise on this innovative new concept, neither those constructing the Metaverse nor the organisations looking to take advantage of it can afford to overlook the basics of ensuring it has safety and security at its core.
Matt Peake is global director of public policy at Onfido.