
Apple Vision Pro eye tracking flaw


A group of researchers has identified a security flaw in Apple’s Vision Pro mixed reality headset. The flaw makes it possible for hackers to reconstruct users’ passwords, PINs, and messages using eye movements. The vulnerability, called ‘GAZEploit,’ relies on eye-tracking data to decode what users type with the virtual keyboard.

The researchers discovered that since the avatars created by the headset are visible to other users, it’s possible to observe and analyze the eye movements of these avatars to predict the characters being typed. This method does not require hackers to gain direct access to the user’s headset. From that remote observation alone, the researchers were able to recover the virtual keyboard’s placement and the keys being pressed with high accuracy.

Within a maximum of five guesses, they deduced the correct letters typed with over 90 percent accuracy for messages, 77 percent for passwords, and 73 percent for PINs. Apple addressed the vulnerability with a patch issued in July. The patch stops the avatar from being displayed while the virtual keyboard is in use.

This aims to prevent such exploitation of biometric data. The researchers highlighted the broader implications of this vulnerability. They noted that biometric data captured through wearable technology could inadvertently expose sensitive information.

Given the increasing amount of biometric information being recorded, the potential risk of such data falling into the wrong hands is a growing concern. Wearable tech has ushered in new privacy challenges.

Apple Vision Pro vulnerability exposed

Wearable devices capture and store detailed health data, locations, and other personal information that could be misused if not properly secured. This incident underlines the importance of scrutinizing how biometric data is handled and safeguarded as the use of wearable technology continues to proliferate.

The GAZEploit attack consists of two parts.

First, the researchers created a way to identify when someone wearing the Vision Pro is typing by analyzing the 3D avatar they are sharing. For this, they trained a recurrent neural network, a type of deep-learning model, on recordings of 30 people’s avatars while they completed various typing tasks. Second, they used geometric calculations to work out the position and size of the virtual keyboard.
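To make the first stage concrete, below is a minimal sketch of a recurrent classifier that labels a short window of per-frame gaze features from the avatar video as typing or not typing. The feature set (gaze direction plus an eye-aspect-ratio value), the window length, and the model size are illustrative assumptions, not details taken from the researchers’ paper.

```python
# A minimal sketch (not the researchers' actual model) of the first GAZEploit
# stage: deciding whether a short window of avatar gaze features corresponds
# to typing on the virtual keyboard. Feature names and dimensions are assumed.
import torch
import torch.nn as nn

class TypingDetector(nn.Module):
    """Binary classifier over a sequence of per-frame gaze features
    (e.g., gaze direction x/y and eye aspect ratio read off the avatar video)."""
    def __init__(self, feature_dim: int = 3, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, frames, feature_dim)
        _, (h_n, _) = self.rnn(x)                  # final hidden state summarizes the window
        return torch.sigmoid(self.head(h_n[-1]))   # probability the window is a typing session

# Example: score a window of 60 frames of placeholder gaze features.
model = TypingDetector()
window = torch.randn(1, 60, 3)                     # synthetic data, not real avatar recordings
print(model(window).item())
```

In practice such a detector would be trained on labeled typing and non-typing windows from the avatar recordings; the snippet only shows the model shape and how a window would be scored.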

“The only requirement is that as long as we get enough gaze information that can accurately recover the keyboard, then all following keystrokes can be detected,” researcher Zihao Zhan explains. In their lab tests, the researchers did not have any knowledge of the victim’s typing habits, speed, or where the keyboard was placed. Nonetheless, they could predict the correct letters typed with 92.1 percent accuracy in messages, 77 percent for passwords, 73 percent for PINs, and 86.1 percent for emails, URLs, and webpages.
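As an illustration of the second, geometric stage, the sketch below assumes the keyboard’s position and key size have already been recovered and simply snaps each stabilized gaze fixation to the nearest key centre on a QWERTY grid. The layout coordinates and the fixation data are placeholders, not values from the study.

```python
# A minimal sketch of keystroke recovery once the virtual keyboard's position
# and size have been estimated: each stabilized gaze fixation is mapped to the
# nearest key centre. Layout, coordinates, and fixations are illustrative only.
from math import dist

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_centres(origin, key_w, key_h):
    """Approximate key centres for a QWERTY grid anchored at `origin` (top-left)."""
    centres = {}
    for r, row in enumerate(ROWS):
        for c, ch in enumerate(row):
            x = origin[0] + (c + 0.5 + 0.25 * r) * key_w   # lower rows are slightly offset
            y = origin[1] + (r + 0.5) * key_h
            centres[ch] = (x, y)
    return centres

def decode(fixations, centres):
    """Map each gaze fixation point to the closest key."""
    return "".join(min(centres, key=lambda ch: dist(centres[ch], f)) for f in fixations)

centres = key_centres(origin=(0.0, 0.0), key_w=1.0, key_h=1.0)
fixations = [centres["h"], centres["i"]]    # placeholder fixations that should decode to "hi"
print(decode(fixations, centres))           # -> "hi"
```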

While the attack was created in lab settings and hasn’t been used against anyone in the real world, the researchers say there are ways hackers could theoretically abuse the data leakage. For example, an attacker could share a file with a victim during a Zoom call that prompts them to log in to a Google or Microsoft account. The attacker could then record the victim’s Persona while they log in and use the attack method to recover their password and access the account.

The research highlights how people’s personal data can be inadvertently leaked or exposed. These privacy and surveillance concerns are likely to become more pressing as wearable technology becomes smaller, cheaper, and able to capture more information about people.
