Computer scientists develop program that mimics human eye movements to train the metaverse

The program accurately imitates the way humans view the world, allowing for the training of virtual and augmented reality software.

Computer engineers have developed a program for the metaverse that can accurately imitate the way humans view the world, allowing for the training of virtual and augmented reality software. 

The program, named EyeSyn, helps programmers build software for the fast-evolving metaverse, while also safeguarding user data.

It builds on recent advances in eye-tracking technology, which have produced visual applications that can estimate cognitive load and recognise human emotion. Many AR and VR companies are now incorporating eye-tracking capabilities into their programs.

Developed by computer engineers at Duke University and TU Delft, the virtual platform imitates the way in which human eyes track stimuli. These stimuli can range from people viewing art in galleries to engaging in conversations or buying things online.

The EyeSyn program allows developers to use this simulated data to train new metaverse platforms, applications and games.

Simulated virtual eyes can mimic human eye movement

Eyes are often described as the windows to the soul, and for good reason. The way human eyes move and pupils dilate offers a remarkable amount of information. Eye movements can disclose whether a person is excited or bored, how experienced they are at a task, where their attention is focused, or whether they are fluent in a particular language.

“If you’re interested in detecting whether a person is reading a comic book or advanced literature by looking at their eyes alone, you can do that,” said Maria Gorlatova, an Assistant Professor of Electrical and Computer Engineering at Duke and one of the developers of EyeSyn.

Gorlatova explained that where a person focuses their gaze can reveal a great deal about them. She said, “It can inadvertently reveal sexual and racial biases, interests that we don’t want others to know about, and information that we may not even know about ourselves.”

Eye movement data are vital to organisations developing software and applications for the metaverse. By studying a user’s eye movements, for instance, developers can tailor content to how the user engages with it, or lower the resolution in their peripheral vision to conserve computing power. Virtual eye contact can also affect our nervous systems in much the same way as physical eye contact, helping people feel connected.
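
As a rough illustration of the peripheral-resolution idea, sometimes called foveated rendering, the sketch below shows how a renderer might pick a resolution scale based on how far a pixel lies from the user’s gaze point. The function name, radius and scale factors are invented for illustration; they are not taken from the EyeSyn work.

```python
# Illustrative sketch (not from the EyeSyn research): gaze-contingent
# rendering picks a lower resolution for regions far from where the
# user is looking, conserving computing power.
import math

def render_scale(pixel, gaze, fovea_radius_px=200, periphery_scale=0.25):
    """Return a resolution scale factor for a pixel given the gaze point.

    `pixel` and `gaze` are (x, y) screen coordinates. Everything inside
    the assumed foveal radius renders at full resolution, the rest at a
    reduced scale. The thresholds here are made-up placeholders.
    """
    distance = math.dist(pixel, gaze)
    if distance <= fovea_radius_px:
        return 1.0          # full resolution where the eye is focused
    return periphery_scale  # cheaper rendering in peripheral vision

# Example: gaze at the screen centre, query a peripheral pixel
print(render_scale((1700, 900), (960, 540)))  # -> 0.25
```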

“But training that kind of algorithm requires data from hundreds of people wearing headsets for hours at a time,” Gorlatova explained. “We wanted to develop software that not only reduces the privacy concerns that come with gathering this sort of data, but also allows smaller companies who don’t have those levels of resources to get into the metaverse game.”

Developing EyeSyn using eye movement data

To develop the EyeSyn program, the Duke engineers drew on cognitive science literature to study how people view the world and process visual information.

Gorlatova’s team included a Duke PhD student, Tim Scargill, and former postdoctoral associate Guohao Lan, now an assistant professor at the Delft University of Technology in the Netherlands. 

It was a complex task: the researchers needed to create virtual eyes that imitate how humans respond to a wide range of stimuli. When someone watches another person speak, for instance, their eyes oscillate between the speaker’s eyes, mouth and nose, dwelling on each for varying amounts of time.
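
To give a sense of what such a pattern might look like in code, here is a minimal sketch of a conversation-viewing model as a small Markov chain over facial regions. The transition probabilities and dwell times below are invented for illustration and are not EyeSyn’s actual parameters.

```python
# Hypothetical sketch: gaze during conversation modelled as a Markov
# chain that hops between a speaker's eyes, mouth and nose, dwelling on
# each region for a short, variable fixation.
import random

TRANSITIONS = {            # P(next region | current region), assumed values
    "eyes":  {"eyes": 0.50, "mouth": 0.35, "nose": 0.15},
    "mouth": {"eyes": 0.55, "mouth": 0.30, "nose": 0.15},
    "nose":  {"eyes": 0.60, "mouth": 0.30, "nose": 0.10},
}

def simulate_fixations(n_fixations=10, start="eyes"):
    """Generate a sequence of (region, dwell_seconds) fixations."""
    region, sequence = start, []
    for _ in range(n_fixations):
        dwell = random.uniform(0.15, 0.6)   # plausible fixation lengths
        sequence.append((region, round(dwell, 2)))
        probs = TRANSITIONS[region]
        region = random.choices(list(probs), weights=list(probs.values()))[0]
    return sequence

print(simulate_fixations())
```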

When building EyeSyn, instead of recording real human eyes, the team fed the system models of eye-movement patterns for activities such as reading, engaging in conversation, and watching videos. These models were then used to create a synthetic dataset for the program. In this way, EyeSyn learned to observe and reproduce those patterns, generating data it could then use.
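
Purely as an illustration of how per-activity templates could be turned into a labelled synthetic dataset, the sketch below generates toy gaze sequences for two activities. The template functions and sampling choices are assumptions, not the models the researchers actually used.

```python
# Hypothetical sketch: generate labelled synthetic gaze sequences from
# simple per-activity templates.
import random

def reading_template():
    """Left-to-right saccades with an occasional return sweep."""
    x, points = 0.1, []
    for _ in range(40):
        x += random.uniform(0.02, 0.05)
        if x > 0.9:
            x = 0.1                      # return sweep to the next line
        points.append((x, random.uniform(0.45, 0.55)))
    return points

def video_template():
    """Gaze clustered near the centre of the frame."""
    return [(random.gauss(0.5, 0.1), random.gauss(0.5, 0.1)) for _ in range(40)]

TEMPLATES = {"reading": reading_template, "watching_video": video_template}

def synthesize_dataset(samples_per_activity=100):
    """Return (gaze_sequence, activity_label) pairs for later training."""
    data = []
    for label, template in TEMPLATES.items():
        for _ in range(samples_per_activity):
            data.append((template(), label))
    return data

dataset = synthesize_dataset()
print(len(dataset), dataset[0][1])  # -> 200 reading
```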

The engineers said this process eliminates some of the privacy concerns tied to collecting large volumes of biometric data required to train algorithms.

“If you give EyeSyn a lot of different inputs and run it enough times, you’ll create a data set of synthetic eye movements that is large enough to train a (machine learning) classifier for a new program,” Gorlatova said.
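
To make that workflow concrete, here is a minimal, hypothetical sketch of training a standard classifier on features computed from synthetic gaze sequences. The toy data generator, the feature set and the choice of scikit-learn’s RandomForestClassifier are all assumptions for the sake of example, not the actual EyeSyn pipeline.

```python
# Hypothetical sketch: train an off-the-shelf classifier on summary
# features of synthetic gaze sequences, then classify a new sequence.
import random
from statistics import mean, pstdev
from sklearn.ensemble import RandomForestClassifier

def fake_sequence(activity):
    """Stand-in synthetic gaze generator; real templates would go here."""
    spread = 0.05 if activity == "watching_video" else 0.25
    return [(random.gauss(0.5, spread), random.gauss(0.5, 0.05)) for _ in range(40)]

def gaze_features(points):
    """Summarise a gaze sequence as a fixed-length feature vector."""
    xs, ys = zip(*points)
    return [mean(xs), mean(ys), pstdev(xs), pstdev(ys)]

X, y = [], []
for label in ("reading", "watching_video"):
    for _ in range(200):                 # many synthetic runs per activity
        X.append(gaze_features(fake_sequence(label)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.predict([gaze_features(fake_sequence("reading"))]))  # -> ['reading']
```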

Privacy intrusion remains minimal because the program depends on templates rather than large, cloud-based datasets of human eye movements. The platform also uses fewer resources, making it easier for smaller developers to build and run virtual environments with modest amounts of computing power.

How accurate are the virtual eyes?

To test how accurate the “eyes” were, the researchers used publicly available data. They had the synthetic eyes watch videos of Dr Anthony Fauci speaking at press conferences, then compared that information to data from the eye movements of human viewers.

In addition, the engineers compared data from the synthetic eyes observing art with data from people who browsed an art museum online.
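
One simple way to quantify that kind of comparison, sketched below purely as an illustration, is to bin each set of gaze points into a coarse heatmap and correlate the two maps. The grid size and similarity metric are assumptions; the study’s actual evaluation is more involved.

```python
# Hypothetical sketch: compare synthetic and human gaze recordings by
# correlating their spatial fixation heatmaps.
import numpy as np

def heatmap(points, bins=16):
    """2-D histogram of normalised (x, y) gaze points, summing to 1."""
    xs, ys = zip(*points)
    hist, _, _ = np.histogram2d(xs, ys, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()

def gaze_similarity(synthetic_pts, human_pts):
    """Pearson correlation between the two flattened heatmaps."""
    a = heatmap(synthetic_pts).ravel()
    b = heatmap(human_pts).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Example with made-up points clustered around a speaker's face region
rng = np.random.default_rng(0)
synthetic = rng.normal([0.5, 0.4], 0.05, size=(500, 2)).tolist()
human = rng.normal([0.5, 0.42], 0.06, size=(500, 2)).tolist()
print(round(gaze_similarity(synthetic, human), 2))
```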

Their findings revealed that EyeSyn could closely match the distinct patterns of real human gaze signals and imitate the varied ways people’s eyes react to different stimuli.

The program’s performance was strong enough for developers to use it as a baseline for training new metaverse software, applications and platforms. By personalising its algorithms after interacting with specific users, commercial software can achieve even better results.

“The synthetic data alone isn’t perfect, but it’s a good starting point,” Gorlatova said. “Smaller companies can use it rather than spending the time and money of trying to build their own real-world datasets (with human subjects). And because the personalisation of the algorithms can be done on local systems, people don’t have to worry about their private eye movement data becoming part of a large database.”

The study findings were presented earlier in the year at the International Conference on Information Processing in Sensor Networks (IPSN), an annual forum on research in networked sensing and control.