STONY BROOK, NY, April 27, 2021 – A study that tested neural activity in the brains of individuals with Autism Spectrum Disorder (ASD) reveals that they successfully encode facial emotions in their neural signals – and they do so about as well as those without ASD. Led by researchers at Stony Brook University, the research suggests that the difficulties ASD individuals have reading facial emotions arise from problems in translating the facial emotion information they have successfully encoded, not from a failure to encode it in the first place. The findings are published early online in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.
According to Matthew D. Lerner, PhD, Senior Author and Associate Professor of Psychology, Psychiatry & Pediatrics in the Department of Psychology at Stony Brook University, this electroencephalogram (EEG) imaging study allowed the researchers to test a fundamental question about autism that has not yet been clearly addressed: Are the challenges in emotion recognition due to the emotional information not being encoded in the brain in the first place, or is it accurately encoded but just not deployed?
“Our findings indicate the latter part of that question appears to be the more likely explanation for why many autistic individuals struggle to read facial emotions,” explains Dr. Lerner. “Particularly now, when mask-wearing is pervasive and everyone has less facial emotion information available to them in daily life, it is especially important to understand how, when, and for whom struggles in reading these emotions emerge – and also when we may be misunderstanding the nature of these struggles.”
The study involved 192 individuals of different ages nationwide, whose neural signals were recorded while they viewed many facial emotions. The team used deep convolutional neural networks, a contemporary machine learning approach, to classify facial emotions from these recordings. This approach enabled the researchers to examine the EEG activity of individuals with and without ASD while they were watching faces and judging what emotions they saw. For each individual face, the algorithm could indicate what emotion the person was viewing – essentially, an attempt to map the neural patterns the participants’ brains were using to decode emotions.
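For readers curious about the general idea of decoding a viewed emotion from EEG, the toy sketch below shows the skeleton of a convolutional classifier: temporal filters slide over multichannel EEG, pooled features feed a linear readout, and a softmax yields one probability per candidate emotion. This is not the study's model; the channel counts, filter sizes, and (untrained) random weights are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one EEG epoch: 8 channels x 128 time samples.
# (All dimensions and weights here are illustrative, not the study's.)
n_channels, n_samples, n_emotions = 8, 128, 6
epoch = rng.standard_normal((n_channels, n_samples))

def conv1d(x, kernels):
    """Valid-mode 1D cross-correlation of each kernel over all channels."""
    n_k, _, k_len = kernels.shape
    out = np.empty((n_k, x.shape[1] - k_len + 1))
    for i, k in enumerate(kernels):
        # np.convolve flips its kernel, so reverse it to get cross-correlation
        out[i] = sum(np.convolve(x[c], k[c][::-1], mode="valid")
                     for c in range(x.shape[0]))
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random (untrained) weights: 4 temporal filters of length 9, then a readout.
kernels = rng.standard_normal((4, n_channels, 9)) * 0.1
feat = np.maximum(conv1d(epoch, kernels), 0)   # ReLU nonlinearity
feat = feat.reshape(4, -1, 8).mean(axis=2)     # average pooling over windows of 8
w = rng.standard_normal((n_emotions, feat.size)) * 0.01
probs = softmax(w @ feat.ravel())              # one probability per emotion class
print(probs.shape, probs.sum())
```

In a real decoder these weights would be fit on labeled trials; the shape of the pipeline (filters, pooling, classification) is what a deep convolutional network scales up.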
According to the authors, the findings have major implications for understanding how individuals with ASD process emotions and for developing new types of interventions to help improve ASD individuals’ facial emotion assessments of other people.
“Specifically, many interventions try to help people with ASD to compensate for not understanding emotions – essentially, they are emotion recognition prosthetics. However, our findings suggest that these approaches may not be helpful, and rather we should focus on capitalizing on and reinforcing their intact encoding of emotions,” adds Dr. Lerner.
The research, in collaboration with the University of Trento, involved imaging and data collection made possible by the Institute for Advanced Computational Science at Stony Brook University and use of the SeaWulf computing system.
The work was supported in part by a grant from the National Institutes of Health’s National Institute of Mental Health (NIMH) (#R01MH110585) and a grant from the National Science Foundation (#1531492). Additional support and funding included grants from the American Psychological Association, the American Psychological Foundation, the Jefferson Scholars Foundation, the Alan Alda Fund for Communication, and the Association for Psychological Science.