Security in the Metaverse: Will We Ever Be Safe Enough?

Without new security measures to protect users in the metaverse, privacy may simply not be possible. That is the conclusion of a recent study from the University of California, Berkeley. What is equally surprising is how little information is needed to identify users in this space, which could eliminate any chance of the much-anticipated anonymity in virtual and augmented reality.


Illustration: Milica Mijajlovic

The popularity of the metaverse, i.e., the virtual world, keeps growing. It is becoming the next must-have concept that companies are eager to build on. But as it grows, so do the security threats. Even though the metaverse isn't fully here yet, businesses can already think about the security challenges it poses. Being aware of them is a vital step towards protecting our most sensitive information.

The study was led by graduate researcher Vivek Nair at the Center for Responsible Decentralized Intelligence (RDI) at the University of California, Berkeley. The researchers used the largest dataset of user interactions in virtual reality (VR) ever examined for privacy risks, and the results are alarming.

VR and AR security challenges 

As parts of the metaverse, both VR and AR raise multiple security and privacy issues. Like the metaverse as a whole, VR environments are unregulated, at least for the time being. Given the intrusive data collection and the fact that massive amounts of data are constantly shared with anonymous users, regulations are bound to come at some point. For now, though, data protection is left to the discretion of the platform owner.

Knowing who you are interacting with in the metaverse is tough. There is no proof that the people you engage with are who they claim to be. The property owner has no way to verify users' credentials, and no reliable way to decide who should be allowed or denied entry. It is also hard to monitor what happens inside these properties, which makes them well suited for illegal financial transactions.

Since users enter virtual reality via a headset, a compromised headset endpoint can let an attacker take over the user's avatar completely. Avatars can also change their appearance, leaving meetings, private conversations, and other interactions vulnerable to eavesdropping and interference.


Protecting your privacy in the Web 2.0 metaverse is extremely difficult. The reason is simple: to access the metaverse, you have to provide the information required by the platform it belongs to. If you refuse to provide it or disable data collection, you won't be able to enter.

Professor J. Keeting, Ape School

Since AR relies on overlaying data from third parties, any weakness in the integrity of that data becomes a significant challenge. For instance, a user can receive inaccurate directions if a location app overlaid onto a headset uses faulty location data.

Physical security is also a concern, because users often roam around the real world while wearing an AR overlay. If they become too engrossed in the virtual world, they risk hurting themselves or those around them.

Simple motion data is not so simple  

Many researchers who study security in the metaverse focus on VR headsets, specifically their cameras and microphones. These sensors record detailed information about users' facial features, eye movements, and vocal qualities, along with ambient information about the users' location.

Other researchers are concerned about newer technologies like EEG sensors, which can detect brain activity through the user's scalp. The problem is that, although such data carry considerable privacy risk, switching them off may not guarantee anonymity. So-called simple motion data – the fundamental data stream needed to interact with VR – could be all that is required to identify a user.

Simple motion data consists of three data points tracked by VR systems: one on the user's head and two on the hands. Researchers often call this "telemetry data." It is the minimal dataset needed to enable natural interaction in VR.
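To make the idea concrete, a single frame of this telemetry stream might look roughly like the sketch below. This is purely illustrative: the field names, the quaternion representation, and the sampling rate are assumptions, not details taken from the Berkeley study or any particular headset SDK.

```python
# Illustrative sketch of one "simple motion data" sample: three tracked
# points (head, left hand, right hand), each with a position and orientation.
# Field names and the ~72 Hz rate are assumptions for illustration only.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]         # x, y, z position in metres
Quat = Tuple[float, float, float, float]  # orientation as a quaternion

@dataclass
class TelemetryFrame:
    timestamp: float        # seconds since the session started
    head_pos: Vec3
    head_rot: Quat
    left_hand_pos: Vec3
    left_hand_rot: Quat
    right_hand_pos: Vec3
    right_hand_rot: Quat

# A VR session is simply a long stream of such frames, e.g. ~72 per second,
# which is all an application needs to animate the user's avatar.
```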

Instant identification 

The new Berkeley study, titled Unique Identification of 50,000-plus Virtual Reality Users from Head and Hand Motion Data, examined more than 2.5 million fully anonymized VR data recordings collected from over 50,000 players of the Beat Saber app. The study revealed that with just 100 seconds of motion data, individual users can be identified with more than 94% accuracy.

Even more astounding, only 2 seconds of motion data were enough to uniquely identify 50% of all users. Although innovative AI methods were needed to reach this level of precision, the data itself was incredibly limited: just three spatial points per user, tracked over time.

Put simply, whenever users put on a headset, pick up the hand controllers, and immerse themselves in a mixed reality environment, they leave behind a stream of digital fingerprints that can be used to identify them. The comparison with actual fingerprints is apt: it is astonishing how accurately a person can be identified from a single print.
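The sketch below is only a toy illustration of why such a small stream can behave like a fingerprint. The Berkeley team used far more sophisticated machine learning models; the summary statistics and the nearest-neighbour matching here are simplifying assumptions, not their method.

```python
# Toy "motion print": condense a telemetry recording into a few statistics
# and match it against prints of previously seen users. This is NOT the
# Berkeley approach, only a simplified illustration of the principle.
import numpy as np

def motion_print(head_heights, left_hand_speeds, right_hand_speeds):
    """Summarize one recording (arrays of per-frame values) as a feature vector."""
    return np.array([
        np.median(head_heights),      # roughly tracks the user's height
        np.std(head_heights),         # how much the user bobs and crouches
        np.mean(left_hand_speeds),    # how briskly the left hand moves
        np.mean(right_hand_speeds),   # how briskly the right hand moves
    ])

def identify(candidate_print, enrolled_prints):
    """Return the enrolled user whose motion print is closest to the candidate."""
    return min(enrolled_prints,
               key=lambda user: np.linalg.norm(candidate_print - enrolled_prints[user]))
```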

Vulnerability or irrational fear?
Fear is fairly positive and one should have it, as it urges us to be cautious and innovative. In my opinion, however, the fear communicated today is exaggerated. We already live in a pretty transparent society, not to say that there is no privacy anymore. Just think about smartphones, Alexa, smart TVs, smartwatches with GPS, etc. All of them are tools that fully enable user tracking, so the metaverse is not going to change the situation we’re already in – claims Professor J. Keeting.

Taking anonymity away   

What the Berkeley research suggests is that whenever users immerse themselves in VR and swing a virtual saber, they leave behind motion data that may be even more uniquely identifiable than their actual fingerprints.

Consequently, this poses a serious privacy risk, as it could erase anonymity in the metaverse. On top of that, motion data can be used to accurately infer a number of a user's characteristics, including gender, height, and handedness. Combined with other data tracked in mixed reality, the motion-based fingerprint approach may produce even more definitive identifications.
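As a rough illustration of how such attributes might be read off the very same stream, consider the hypothetical helpers below. The headset offset and the dominant-hand heuristic are assumptions made for the example, not findings from the study.

```python
# Hedged sketch: guessing height and handedness from the three tracked points.
# The 10 cm headset offset and the "dominant hand moves more" heuristic are
# illustrative assumptions, not measured values from the Berkeley study.
import numpy as np

def infer_height_m(head_heights_m):
    # The headset sits a little below the top of the head, so add a rough
    # offset to the median standing height of the headset.
    return float(np.median(head_heights_m)) + 0.10

def infer_handedness(left_hand_speeds, right_hand_speeds):
    # In many apps the dominant hand does most of the pointing and swinging.
    return "right" if np.mean(right_hand_speeds) > np.mean(left_hand_speeds) else "left"
```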

To illustrate how dangerous motion-based fingerprints can be, picture the metaverse of the near future and suppose you regularly shop in VR and AR environments. Whether you are browsing products or visualizing how a piece of furniture might look in your home, you perform specific physical motions, such as taking objects off shelves or stepping back to get a better look.

According to the Berkeley study, these regular movements may be as distinctive to each of us as our fingerprints. If so, these so-called “motion prints” would imply that casual customers couldn’t visit the virtual store without being easily identified. 

Can the privacy issue be resolved? 

One possible approach to the privacy issue is to obfuscate the motion data before it reaches external servers. However, that means introducing noise. While noise might safeguard users' privacy, it also reduces the accuracy of tracked physical motions and thus compromises user performance in applications that require physical skill. That, in turn, may drive users away from mixed reality altogether.
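A minimal sketch of what such client-side obfuscation could look like is shown below, assuming the simplest possible scheme of adding zero-mean Gaussian noise to each tracked position before it leaves the device. The noise level is an arbitrary illustration and would have to be tuned against the loss of precision described above.

```python
# Minimal client-side obfuscation sketch: add zero-mean Gaussian noise to the
# head and hand positions before they are sent to an external server. The
# sigma value is an arbitrary example; more noise means more privacy but less
# precise interaction, which is exactly the trade-off discussed above.
import numpy as np

def obfuscate_positions(positions, sigma_m=0.02):
    """positions: array of shape (3, 3) holding head, left hand, right hand (x, y, z)."""
    noise = np.random.normal(loc=0.0, scale=sigma_m, size=positions.shape)
    return positions + noise

# Only the noisy positions would ever leave the headset; the clean data stays local.
```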

Another possible approach is to pass regulation forbidding the continuous collection and analysis of motion data. Such regulation would help protect the public, although it would be hard to enforce and would face resistance from the industry.

Professor J. Keeting at Ape School also agrees. “I believe it’s necessary to define new regulations that will make sure each one of us can influence whether personal information is collected and how it is processed. In terms of technology, it’s not a big deal. Personal data collection is not decisive for a vicarious immersive experience,” Professor J. Keeting concludes. 

Berkeley researchers are investigating advanced defensive strategies hoping they will obfuscate the distinctive features of gestures without compromising dexterity in mixed reality. Protecting privacy is critical for both the entire industry and the users. If users feel exposed or unsafe in the metaverse, they may be hesitant to join mixed reality environments. 

What is the metaverse, for those who skipped class

Professor J. Keeting of Ape School explains that we should differentiate between two types of metaverse based on their underlying principles. “In essence, there are two types of the metaverse: the one implemented according to Web 2.0 principles, such as Horizon Worlds, and the metaverse implemented based on Web 3.0, like Decentraland (MANA).”

So, the metaverse is a virtual world where people can connect and interact. The name, signifying a fusion of the physical and digital worlds, combines the Greek prefix meta, meaning beyond or after, with verse, short for universe. The metaverse is mostly divided into two types: virtual reality (VR) and augmented reality (AR).

Virtual reality offers an artificial experience through a VR headset that fills the user's field of vision. Deeper immersion comes from audio and positional tracking of the user's body, which allows interaction with the virtual environment using one's hands and other body parts.

Augmented reality (AR), on the other hand, is less immersive than VR. It uses some form of lens to place virtual overlays on top of the real scene, while users continue to see their environment normally. Examples include wearables such as Microsoft's HoloLens and smartphone apps like Waze. In both cases, the host can ascertain the user's location and intentions.

It's worth noting that, in general, users should not expect privacy rights in VR experiences. AR, in contrast, has a foothold in the real world, so privacy rights there are better supported.

Typical metaverse security challenges  

According to Professor J. Keeting, the metaverse built on Web 2.0 principles poses a threat to privacy. Some of the most frequent security challenges include:  

  • Moderation, 
  • Client vulnerabilities, 
  • Data accuracy, 
  • Identity, 
  • User-to-user communication, 
  • Privacy. 

Moderation challenges mean that users are left without help or support in the majority of metaverses. Whatever happens, they cannot count on regulations or backup from third parties, so incidents such as NFT theft leave users on their own.

As for client vulnerabilities, VR and AR headsets are powerful machines that run a lot of software and hold a lot of data, which makes them ideal targets for both accidental and malicious hacks. In addition, device manipulation and location spoofing allow wrongdoers to take over a client's identity and cause chaos once they enter the metaverse.


Each metaverse built on Web 2.0 principles poses a threat to privacy. Depending on the technology it implements, the metaverse collects a massive amount of data about the user and merges it with the data it already has. Facebook is a good example: if you have a Facebook account and start using Horizon Worlds, Facebook will “enrich” your account with a heap of data collected in the metaverse. So generally speaking, the metaverse is a threat to privacy, particularly the Web 2.0 variant.

Professor J. Keeting, Ape School 

Data such as location, user information, reviews, and third-party information are only as useful as they are accurate, and ensuring accuracy in the metaverse can be quite challenging. Special attention should be paid to users' identities, which are constantly in jeopardy: accounts can be hacked, identities spoofed, and avatars taken over. It can therefore remain forever questionable who you are actually dealing with in the metaverse.

The immersive experience of the metaverse is all about making user-to-user communication easy and efficient. Such relationships rely heavily on trust and are built mainly through interaction, and a single malicious actor can cause irreparable damage. That is why moderation is vital and has to be taken seriously.

As previously mentioned, there are no laws governing the metaverse. The data collection needed for a genuinely customized immersive experience requires an invasion of privacy, yet users are frequently unaware of how much data they are handing over. Moreover, virtual experiences are borderless, unlike physical ones, so privacy protection is at the mercy of the platform and property owners. Regulations like the GDPR have no jurisdiction in the metaverse.

"Ever tried. Ever failed. Never mind. Try again. Fail better."

