Have you ever wondered if a machine could anticipate your next move?
That’s exactly what Professor Minh Hoai Nguyen and his team have been researching in the Department of Computer Science (CS) at Stony Brook University. The remarkable nature of this project has earned it a funding award from the National Science Foundation worth nearly $1.2 million, which will allow researchers in computer science and psychology to join forces and move forward with practical studies.
As Professor Nguyen explains, the team will use a method called inverse reinforcement learning to model human behavior and predict visual attention. By finding patterns in human eye movements, they hope a computational system can learn to anticipate responses to particular stimuli. The result will be a way for machines to learn where users will direct their attention.
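To illustrate the general idea (this is a toy sketch under simplifying assumptions, not the team's actual model): inverse reinforcement learning infers a hidden "reward" that explains observed behavior. Here, demonstrated fixations on a coarse grid of image regions are used to recover a per-region reward under a softmax choice model, and that reward is then used to predict the next fixation. All function names and the grid setup are invented for illustration.

```python
import numpy as np

def estimate_reward(fixations, grid_shape):
    """Recover a per-region reward map from observed fixations.

    Under a softmax (maximum-entropy) choice model, the log of the
    empirical fixation frequency equals the reward up to a constant.
    """
    counts = np.full(grid_shape, 1e-6)   # smoothing avoids log(0)
    for r, c in fixations:
        counts[r, c] += 1.0
    probs = counts / counts.sum()
    return np.log(probs)                 # reward up to an additive constant

def predict_next_fixation(reward, current):
    """Predict the next fixation as the highest-reward neighboring cell."""
    r, c = current
    rows, cols = reward.shape
    neighbors = [(r + dr, c + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)
                 and 0 <= r + dr < rows and 0 <= c + dc < cols]
    return max(neighbors, key=lambda cell: reward[cell])

# Demonstrated fixations cluster on cell (1, 1), so from an adjacent
# cell the model predicts a saccade toward it.
demo = [(1, 1)] * 8 + [(0, 0), (2, 2)]
reward = estimate_reward(demo, (3, 3))
print(predict_next_fixation(reward, (0, 1)))   # -> (1, 1)
```

Real gaze models condition the reward on image content rather than raw position, but the inference pattern is the same: observed behavior in, reward function out, predictions forward from the reward.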
About three years ago, Nguyen, an assistant professor with a PhD in Robotics, began discussing this idea with co-investigators Dimitris Samaras (CS) and Gregory Zelinsky (Psychology). Samaras and Zelinsky have studied human gaze and attention for decades; now, as a team, the three will work to create a model that predicts this behavior.
Nguyen’s team predicts that this research will change the way visual imagery is presented, and one example is streaming services. According to a study by Pew Research Center, 61 percent of Americans between the ages of 18 and 29 use streaming services as their primary source of television. In densely populated areas, that demand can cause slow buffering or dropped connections. The model being developed by the team at Stony Brook would allow streaming platforms to concentrate resolution on the areas of the screen that users are most likely to be viewing. Bandwidth use would therefore be optimized without any perceived loss of quality.
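A minimal sketch of how such attention-aware streaming might work, assuming the screen is split into tiles and a predicted attention map is already available (the function name, tile layout, and bitrate figures here are hypothetical, not from the project):

```python
import numpy as np

def allocate_bitrate(attention, total_kbps, floor_kbps=50.0):
    """Split a fixed bitrate budget across screen tiles.

    Every tile receives at least `floor_kbps` so background regions do
    not degrade to nothing; the remaining budget is divided in
    proportion to the predicted attention weights.
    """
    attention = np.asarray(attention, dtype=float)
    n = attention.size
    remaining = total_kbps - floor_kbps * n
    if remaining < 0:
        raise ValueError("budget too small for the per-tile floor")
    weights = attention / attention.sum()
    return floor_kbps + weights * remaining

# A 2x2 tiling where the top-left tile draws most predicted attention:
attention_map = np.array([[0.7, 0.1],
                          [0.1, 0.1]])
kbps = allocate_bitrate(attention_map.ravel(), total_kbps=1000.0)
print(kbps.reshape(2, 2))   # top-left tile gets 610 kbps, the rest 130 each
```

The design choice worth noting is the per-tile floor: with a purely proportional split, a tile the model scores near zero would become visibly blocky the moment the viewer glances at it.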
In a teaching environment, this research could also help educators enhance the experience of their students. The proliferation of technology has given presenters new ways to visually engage their audiences. Models created by Nguyen’s team would allow educators to detect points of high or low attention in presentations, information that could be used to adjust and improve complex lessons.
“The impact of this award will trickle down to a variety of end users, from the person watching a movie on their smartphone to the technology company designing smart homes,” said Samir Das, professor and chair of the Department of Computer Science. “This is a great example of the collaborative work that is quietly taking place in the department and has the potential to affect millions.”
Nguyen and his team hope that this software could also improve safety and quality of life for those with impaired mobility, such as the elderly or people with physical challenges. Smart homes could use the models the researchers are developing to provide attention-oriented assistive technologies.
This research is projected to last until May 2022, by which time the team hopes to understand how people allocate their attention while viewing images and videos, and to use that understanding to predict human attention.
If you are a graduate student who would like to get involved in this study, please contact Professor Minh Hoai Nguyen at email@example.com.
About the Researchers
Minh Hoai Nguyen received a PhD in Robotics from Carnegie Mellon University and a BE from the University of New South Wales. Before coming to Stony Brook, Minh Hoai was a post-doctoral research fellow with Andrew Zisserman at Oxford University. He was also a Kurti Junior Research Fellow at Brasenose College. Minh Hoai Nguyen’s research interests are in computer vision, machine learning and time series analysis. His recent research focuses on creating algorithms that recognize human actions, gestures and expressions in video.
Dimitris Samaras earned his PhD in Computer Science from the University of Pennsylvania, an MS in Computer Science from Northeastern University, and a Diploma in Computer Engineering and Informatics from the University of Patras, Greece. Samaras’ research has focused on explaining visual data for computer vision, computer graphics and image analysis through appropriate physical and statistical models. A central interest is modeling the interaction of 3D shape and illumination (a major source of variability in images) for applications such as shape and motion estimation, object recognition and augmented reality.
Gregory Zelinsky is affiliated with the Department of Computer Science and he is a professor of cognitive science in the Department of Psychology at Stony Brook University. Zelinsky, who earned a PhD from Brown University, integrates cognitive, computational and neuroimaging techniques to better understand a broad range of visual cognitive behaviors, including search, object representation, working memory and scene perception.
— Author Duffy Zimmerman; Photographer Gary Ghayrat