
Ph.D. Defense: Badar Almarri

May 17, 2021 @ 11:00 am - 12:30 pm EDT

Title: A BCI Framework for Affection Recognition: Channel and Feature Selection, and Subjective Label Dichotomization

Ph.D. Candidate: Badar Almarri

Major Advisor: Dr. Chun-Hsi Huang

Associate Advisors: Dr. Sanguthevar Rajasekaran and Dr. Sheida Nabavi

Date/Time: Monday, May 17th, 2021, 11:00 AM

Meeting link: https://uconn-cmr.webex.com/uconn-cmr/j.php?MTID=m909538af668ddf836b7b7170c9ddaa56

Meeting number: 120 904 1179

Password: 3TPfnMMTU55

Abstract:

Affective computing has become a vital component in the effort to humanize artificial intelligence. Compared with other sources for reading human emotions, brain signals are considered more objective and accurate in brain-computer interaction. Given the massive number of neurons (on the order of tens of billions) and the rapid, unanticipated interactivity among them, brain imaging and recording solutions such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) are the feasible means of capturing these signals. Yet neurophysiological data come with multifaceted and complex challenges that limit their replicability and predictability. In particular, the dimensionality of the spatially distributed channels in an EEG-based brain-computer interface (BCI) undermines the power to predict human affections. EEG is known for its decent temporal resolution; however, neural interactivities are subjective, and their synchronization patterns vary spontaneously. Thus, before such data are modeled in the final stage of the learning pipeline, adequately preprocessed signal features from the stimulus-related electrode channels are essential for predicting the underlying human emotions.

This dissertation investigates and proposes a framework that tackles two broad problems in EEG-based affective computing studies by leveraging subjective particularities in a highly variant inter-subject environment. First, we enhance the learning pipeline in subject-independent emotion recognition by selecting relevant spatiotemporal features. We present a subject-specific unsupervised learning algorithm that selects the most stimulus- and subject-relevant EEG channels and features based on connectivity analysis of the brain's regions. We also embed unsupervised algorithms for feature extraction and selection in the time and frequency domains. The main components of this algorithmic framework are unsupervised and based solely on neural data, so automatic emotion analysis and recognition are possible in both experimental trials and real-life applications. On real-world data, this framework outperforms other methods used in EEG-based subject-independent emotion recognition studies.

Second, we present solutions for subjective labeling and label imbalance in such experiments. In BCI applications, labels are expected to represent ground truth to some extent, and their existence is a vital component of supervised learning problems. Two recurring issues are class imbalance (i.e., a skewed label distribution) and unreliability due to the uncertainty of subjects' underlying emotional states. Dichotomizing a continuous label scale is a common practice in BCI for classification purposes; the cut point is typically chosen statistically or by a subject-matter expert, while the subjectivity of participants and its impact are neglected. To improve the prediction pipeline, we investigate the effect of thresholding on EEG emotional self-assessments to alleviate the impact of inter-subject variation and label imbalance and thereby improve model outcomes. Compared with conventional methods, the proposed approach improves prediction accuracy by learning subjective dichotomies.
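To make the two ideas in the abstract concrete, the following minimal Python sketch illustrates (a) ranking EEG channels by a simple correlation-based connectivity score and (b) dichotomizing a subject's continuous self-assessment ratings with a subject-specific cut point rather than one fixed threshold. The function names, the correlation-based score, and the per-subject median threshold are illustrative assumptions, not the exact algorithms of the dissertation.

import numpy as np

# Illustrative sketch only: the connectivity proxy and the median-based
# threshold are assumptions made for exposition, not the dissertation's method.

def select_channels_by_connectivity(eeg, k=8):
    """Rank channels by how strongly they co-vary with the rest of the montage.

    eeg : array of shape (n_channels, n_samples) for one subject/trial.
    Returns the indices of the k most connected channels.
    """
    corr = np.abs(np.corrcoef(eeg))          # channel-by-channel connectivity proxy
    np.fill_diagonal(corr, 0.0)              # ignore self-correlation
    connectivity_score = corr.mean(axis=1)   # average coupling with other channels
    return np.argsort(connectivity_score)[::-1][:k]

def dichotomize_per_subject(ratings):
    """Binarize continuous self-assessment ratings (e.g., 1-9 valence)
    using the subject's own median instead of a fixed global cutoff."""
    ratings = np.asarray(ratings, dtype=float)
    threshold = np.median(ratings)           # subject-specific cut point
    return (ratings > threshold).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_eeg = rng.standard_normal((32, 1000))   # 32 channels, 1000 samples
    print("selected channels:", select_channels_by_connectivity(fake_eeg, k=8))
    print("labels:", dichotomize_per_subject([2, 3, 7, 8, 6, 4, 5, 9]))

With the example ratings above, the subject's median (5.5) splits the scale where that participant's own responses cluster, which is the intuition behind learning subjective dichotomies instead of applying one conventional threshold to every subject.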
