FGPat18: Feynman graph pattern-based language detection model using EEG signals
- Publisher: Elsevier
- Publication Type: Journal Article
- Citation: Biomedical Signal Processing and Control, 2023, 85, pp. 104927
- Issue Date: 2023-08-01
Closed Access
Filename | Description | Size
---|---|---
1-s2.0-S1746809423003609-main.pdf | Published version | 3.58 MB
This item is closed access and not available.
We aimed to develop an efficient handcrafted feature engineering model based on four directed graphs modeled on Feynman graph patterns (FGPat) for electroencephalography (EEG)-based language identification. We prospectively acquired a dataset of 3252 EEG signals from 20 native English-speaking Nigerian-born subjects and 20 Turkish subjects, who were shown 20 standardized sentences in English and Turkish, respectively. Fourteen-channel, 15-second EEG signals (sampling frequency 128 Hz) were acquired using the EMOTIV EPOC+ mobile brain cap system. In our FGPat18 model, the input EEG signals and their 17 tunable Q wavelet transform-decomposed wavelet bands were fed to four FGPat-based feature extraction functions and statistical feature generators to extract textural and statistical features, respectively. These features were then concatenated to obtain four final feature vectors of varying lengths. Each final vector was input to the neighborhood component analysis function to select the 256 most discriminative features, which were then fed to the k-nearest neighbor (kNN) classifier for binary classification. Next, iterative majority voting (IMV) was applied to the four kNN-predicted vectors to generate two voted vectors; the most accurate among the six pooled vectors was then selected as the best channel-wise result. Finally, all 14 channel-wise best vectors were input to the IMV algorithm again to calculate another 12 voted vectors, and the best overall result was chosen from among these 26 vectors. FGPat18 attained classification accuracies of 99.38% and 92.47% with 10-fold and leave-one-subject-out cross-validations, respectively, and has linear computational complexity.
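The sketch below is a hypothetical reconstruction of the selection, classification, and voting stages described in the abstract, written in Python with scikit-learn and SciPy. The FGPat/TQWT feature extraction is assumed to have already produced per-channel feature matrices; the function names, the kNN neighborhood size, and the NCA-based selector are assumptions for illustration, not the authors' published code.

```python
import numpy as np
from scipy.stats import mode
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis


def nca_select(X, y, k=256):
    """Approximate NCA-based feature selection: rank features by the column
    norms of the learned NCA projection and keep the top k (a stand-in for a
    MATLAB-style fscnca weight ranking, not the paper's exact routine)."""
    nca = NeighborhoodComponentsAnalysis(random_state=0).fit(X, y)
    weights = np.linalg.norm(nca.components_, axis=0)  # one score per feature
    return X[:, np.argsort(weights)[::-1][:k]]


def knn_predictions(X, y, folds=10, n_neighbors=1):
    """Cross-validated kNN predictions for one selected feature matrix.
    (n_neighbors=1 is an assumption; the abstract does not state it.)"""
    knn = KNeighborsClassifier(n_neighbors=n_neighbors)
    return cross_val_predict(knn, X, y, cv=folds)


def iterative_majority_voting(pred_vectors, y):
    """IMV as implied by the abstract's counts: sort prediction vectors by
    accuracy, then vote over the top 3, 4, ..., n of them, yielding n - 2
    voted vectors (4 inputs -> 2 voted, 14 inputs -> 12 voted).
    Labels are assumed integer-encoded (e.g. 0 = English, 1 = Turkish)."""
    accs = [np.mean(p == y) for p in pred_vectors]
    ranked = np.stack([pred_vectors[i] for i in np.argsort(accs)[::-1]])
    return [mode(ranked[:j], axis=0, keepdims=False).mode
            for j in range(3, len(ranked) + 1)]


def best_channel_result(feature_matrices, y):
    """Per-channel decision stage: given the four feature matrices of one
    channel (FGPat textural + statistical features already concatenated),
    pool the 4 kNN prediction vectors with 2 IMV-voted vectors and keep the
    most accurate of the 6 candidates."""
    preds = [knn_predictions(nca_select(X, y), y) for X in feature_matrices]
    pooled = preds + iterative_majority_voting(preds, y)  # 6 candidates
    return max(pooled, key=lambda p: np.mean(p == y))
```

Repeating the same pooling over the 14 channel-wise winners (14 candidate vectors plus 12 IMV-voted vectors, 26 in total) would yield the overall result reported in the abstract.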