A team of researchers from Stony Brook University received a $1.2 million grant from the National Science Foundation to develop technology that can analyze patterns in human eye movement to help predict where people direct their attention.
Researchers from the Department of Computer Science and the Department of Psychology have teamed up to work on this project.
“We are studying how humans look at images, how computers understand images and how they view human eye movement,” said Associate Professor of Computer Science Dimitris Samaras. “In the end, we hope to come up with models examining how human behavior works.”
The team uses inverse reinforcement learning — a method that uses rewards to predict human behavior — to help develop their model.
“The assumption is that we attend to places that we find rewarding. Everyone finds different things rewarding,” said Greg Zelinsky, a professor in the Department of Psychology. “With inverse reinforcement learning we are able to find visual features that people find rewarding.”
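The idea Zelinsky describes — inferring which visual features a viewer finds rewarding from where they look — can be illustrated with a heavily simplified sketch. This is not the team's actual model; the feature names and numbers below are hypothetical, and a softmax choice model stands in for full inverse reinforcement learning.

```python
import numpy as np

# Toy image split into 4 regions, each described by two hypothetical
# visual features (say, color contrast and edge density).
features = np.array([
    [0.9, 0.1],   # region 0: high contrast
    [0.8, 0.2],   # region 1: high contrast
    [0.1, 0.9],   # region 2: high edge density
    [0.2, 0.1],   # region 3: bland
])

# Observed fixations: this viewer mostly looked at regions 0 and 1,
# hinting that contrast is what they find "rewarding."
fixations = np.array([0, 0, 1, 0, 1, 1, 0, 2])

def fit_reward_weights(features, fixations, steps=2000, lr=0.5):
    """Fit weights w so that reward r = features @ w explains the
    observed fixations under a softmax choice model (a simplified
    stand-in for inverse reinforcement learning)."""
    w = np.zeros(features.shape[1])
    counts = np.bincount(fixations, minlength=len(features))
    empirical = counts / counts.sum()   # observed attention distribution
    for _ in range(steps):
        r = features @ w
        p = np.exp(r - r.max())
        p /= p.sum()                    # model's predicted attention
        # Log-likelihood gradient: observed minus expected features.
        w += lr * (features.T @ (empirical - p))
    return w

w = fit_reward_weights(features, fixations)
print(w)  # the contrast feature ends up with the larger weight
```

After fitting, the learned weights rank the features by how much each one attracts this viewer's gaze — the recovered "reward" Zelinsky refers to.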
Zelinsky said that one of the A.I.’s applications could include observing the behavior associated with various addictions.
The technology they are working on has many potential uses. One idea posed by Zelinsky is intelligent houses: homes that could anticipate homeowners’ goals and even help them search for missing items.
“Your house can anticipate what your 91-year-old mother wants to do, what she wants to find; you will be able to anticipate her attention and her goals and find her pill bottle and her T.V. remote,” Zelinsky said.
One part of the team’s model focuses on video content. The researchers believe the technology could be used to improve streaming services such as Netflix and Hulu. In densely populated areas, video streaming can cause slow buffering speeds or dropped connections. Their model would allow streaming platforms to concentrate resolution on the areas of the screen that users are most likely to be viewing, so bandwidth can be saved without any perceived loss of quality.
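The bandwidth idea above can be sketched in a few lines. This is a minimal illustration, not the team's system: the tile layout, attention scores and bitrate numbers are all hypothetical. A frame is split into tiles, and a fixed bitrate budget is divided in proportion to each tile's predicted attention, with a floor so no tile drops below a minimum quality.

```python
def allocate_bitrate(attention, total_kbps, floor_kbps=50.0):
    """Split a total bitrate budget across screen tiles in proportion
    to predicted viewer attention, guaranteeing each tile a minimum."""
    n = len(attention)
    if total_kbps < n * floor_kbps:
        raise ValueError("budget cannot cover the per-tile floor")
    spare = total_kbps - n * floor_kbps       # bitrate left after the floor
    weight_sum = sum(attention)
    return [floor_kbps + spare * a / weight_sum for a in attention]

# A 4-tile frame where the model predicts most gaze lands on tile 1:
# that tile gets most of the budget, the rest stay at watchable quality.
rates = allocate_bitrate([0.1, 0.6, 0.2, 0.1], total_kbps=1000)
print(rates)
```

The attended tile is encoded at several times the bitrate of the others while the total stays within the same budget — the "no perceived loss of quality" trade-off the researchers describe.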
“Lots of mobile applications can use this technology. They can use it to grab users’ attention and even for advertising. It can result in being used to produce better content also,” Samaras said.
The team says their model could also serve educational purposes. It can detect points of high and low attention in a presentation, which educators could use to adjust and improve complex lessons.
“If you are able to learn what it is that a person will find interesting, then you can test your educational materials to see whether it will attract the person’s attention in the way you intended during a lecture or a presentation,” Zelinsky said.
The research is projected to last until May 2022, when the team hopes to understand how people direct their attention while viewing images and videos, and to use that understanding to better predict human attention.
Professor Minh Hoai Nguyen, the lead researcher on this project, could not be reached for comment.
View original article on the Statesman.