Iyad Obeid, Joseph Picone, Ivan Selesnick (Editors)

Signal Processing in Medicine and Biology: Innovations in Big Data Processing

Editors:
Iyad Obeid, ECE, Temple University, Philadelphia, PA, USA
Joseph Picone, ECE, Temple University, Philadelphia, PA, USA
Ivan Selesnick, Tandon School of Engineering, New York University, Brooklyn, NY, USA

ISBN 978-3-031-21235-2
ISBN 978-3-031-21236-9 (eBook)
https://doi.org/10.1007/978-3-031-21236-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

This edited volume consists of the expanded versions of the exceptional papers presented at the 2021 IEEE Signal Processing in Medicine and Biology Symposium (IEEE SPMB), held at Temple University in Philadelphia, Pennsylvania, USA. This was the second time the symposium was held as a virtual conference, a popular format since it allows greater participation from the international community. We had 180 participants from 36 countries.

IEEE SPMB promotes interdisciplinary papers across a wide range of topics, including applications from many areas of the health sciences. The symposium was first held in 2011 at New York University Polytechnic (now known as the NYU Tandon School of Engineering). Since 2014, it has been hosted by the Neural Engineering Data Consortium at Temple University as part of a broader mission to promote machine learning and big data applications in bioengineering. The symposium typically consists of 18 highly competitive full-paper submissions presented as oral presentations, and 12–18 single-page abstracts presented as posters. Two plenary lectures are included: one focused on research and the other on emerging technology. The symposium provides a stimulating environment where multidisciplinary research in the life sciences is presented. More information about the symposium can be found at www.ieeespmb.org.

This edited volume contains five papers selected from the symposium by the technical committee. Authors were encouraged to expand their original submissions into book chapters. The papers represented in this volume all focus on signal processing applications in the health sciences.
The first paper, titled “Hyper-Enhanced Feature Learning System for Emotion Recognition,” focuses on the problem of automatically detecting the emotional state of a person from five signals: electroencephalogram (EEG), galvanic skin response (GSR), respiration (RES), electromyogram (EMG), and electrocardiogram (ECG). The authors explore techniques to automatically identify key features for characterizing the emotional state of the subject.

The second paper, titled “Monitoring of Auditory Discrimination Therapy for Tinnitus Treatment Based on Event-Related (De-) Synchronization Maps,” deals with the well-known auditory condition of tinnitus and evaluates the effect of auditory discrimination therapy (ADT) by monitoring the level of neural synchronization before and after the ADT-based treatment. Using event-related desynchronization (ERD) and event-related synchronization (ERS) maps, the authors suggest that ADT reduces attention towards tinnitus if incremental alpha-ERS responses are elicited after the ADT-based treatment during an auditory encoding task.

The third paper, titled “Investigation of the Performance of fNIRS-based BCIs for Assistive Systems in the Presence of Acute Pain,” investigates the impact of acute pain conditions on the performance of fNIRS-based brain–computer interfaces (BCIs), exploring the use of this technology as an assistive device for patients with motor and communication disabilities. The authors found that the presence of acute pain negatively impacts the performance of the BCI. This study suggests that it is critical to consider the presence of pain when designing BCIs for assistive systems.

The fourth paper, titled “Spatial Distribution of Seismocardiographic Signal Clustering,” studies the use of seismocardiographic (SCG) signals to monitor cardiac activity.
The authors study distance measures used to cluster these signals and suggest that Euclidean distance under the flow-rate condition would be the method of choice if a single distance measure is used for all patients.

The final paper, titled “Non-invasive ICP Monitoring by Auditory System Measurements,” explores monitoring of intracranial pressure (ICP) for diagnosing various neurological conditions. Elevated ICP can complicate pre-existing clinical disorders and can result in headaches, nausea, vomiting, obtundation, seizures, and even death. This chapter focuses on non-invasive approaches to ICP monitoring linked to the auditory system. The limitation of these non-invasive techniques is their inability to provide absolute ICP values. Hence, these methods are unlikely to substitute for gold-standard invasive procedures in the near term, but due to their reduced risks and ease of use, they might provide clinical utility in a broad range of inpatient, emergency department, and outpatient settings.

We are indebted to all of our authors who contributed to making IEEE SPMB 2021 a great success. The authors represented in this volume worked very diligently to provide excellent expanded chapters of their conference papers, making this volume a unique contribution. We are also indebted to the technical committee for volunteering to review submissions. IEEE SPMB is known for its constructive review process; our technical committee works closely with authors to improve the quality of their submissions.

Philadelphia, PA, USA Iyad Obeid
Brooklyn, NY, USA Ivan Selesnick
Philadelphia, PA, USA Joseph Picone
December 2021

Contents

Hyper-Enhanced Feature Learning System for Emotion Recognition ..... 1
Hayford Perry Fordson, Xiaofen Xing, Kailing Guo, Xiangmin Xu, Adam Anderson, and Eve DeRosa

Monitoring of Auditory Discrimination Therapy for Tinnitus Treatment Based on Event-Related (De-) Synchronization Maps ..... 29
Ingrid G.
Rodríguez-León, Luz María Alonso-Valerdi, Ricardo A. Salido-Ruiz, Israel Román-Godínez, David I. Ibarra-Zarate, and Sulema Torres-Ramos

Investigation of the Performance of fNIRS-based BCIs for Assistive Systems in the Presence of Acute Pain ..... 61
Ashwini Subramanian, Foroogh Shamsi, and Laleh Najafizadeh

Spatial Distribution of Seismocardiographic Signal Clustering ..... 87
Sherif Ahdy, Md Khurshidul Azad, Richard H. Sandler, Nirav Raval, and Hansen A. Mansy

Non-invasive ICP Monitoring by Auditory System Measurements ..... 121
R. Dhar, R. H. Sandler, K. Manwaring, J. L. Cosby, and H. A. Mansy

Index ..... 149

Hyper-Enhanced Feature Learning System for Emotion Recognition

Hayford Perry Fordson, Xiaofen Xing, Kailing Guo, Xiangmin Xu, Adam Anderson, and Eve DeRosa

1 Introduction

We all experience strong feelings, which are necessary for any living being. These include fear, surprise, joy, happiness, anger, and disgust (Ekman, 1992). Emotion recognition plays a vital role in human-computer interaction (HCI), enabling computers to comprehend the emotional states of human beings in an attempt to make computers more “compassionate” in HCI (Corive et al., 2001; Song et al., 2020). Emotion recognition can be broadly divided into two classes. The first class is based on physical signals such as facial expressions (Anderson & McOwan, 2006), body movement (Yan et al., 2014), and speech signals (Khalil et al., 2019). The second class is based on physiological signals such as electroencephalography (EEG) (Kong et al., 2021), electrocardiogram (ECG) (Hasnul et al., 2021), and electromyogram (EMG) (Mithbavkar & Shah, 2021). Some studies in emotion analysis use unimodal signals for emotion recognition (Hajarolasvadi et al., 2020).
Other studies focus on combining different physiological signals in order to model a multimodal paradigm (Abdullah et al., 2021).

H. Perry Fordson (*)
Centre for Human Body Data Science, South China University of Technology, Guangzhou, China
Affect and Cognition Lab, Cornell University, Ithaca, NY, USA
e-mail: [email protected]

X. Xing · K. Guo · X. Xu
Centre for Human Body Data Science, South China University of Technology, Guangzhou, China

A. Anderson · E. DeRosa
Affect and Cognition Lab, Cornell University, Ithaca, NY, USA

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
I. Obeid et al. (eds.), Signal Processing in Medicine and Biology, https://doi.org/10.1007/978-3-031-21236-9_1

Deep neural network structures have been used practically in many fields and have achieved notable successes in applications such as artificial intelligence-based technologies (Buhrmester et al., 2021). Some of the most popular deep neural networks include deep belief networks (DBN) (Hinton et al., 2006; Hinton & Salakhutdinov, 2006), deep Boltzmann machines (DBM) (Taherkhani et al., 2018), artificial neural networks (ANN) (Yegnanarayana, 1994), and convolutional neural networks (CNN). These deep structures are very powerful but often suffer from a time-consuming training process due to the large number of hyperparameters, which makes the structures highly complicated. In addition, these numerous hyperparameters make the deep structures hard to analyze theoretically. Therefore, most works tune parameters and add more layers for better accuracy. This, however, requires ever more powerful computational resources to improve training performance, such as gradable representation structures and ensemble learning structures (Tang et al., 2016; Chen et al., 2012, 2015; Gong et al., 2015; Feng & Chen, 2018; Yu et al., 2015, 2016a, b).
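One alternative to such deep, gradient-trained structures, and the basis of the broad learning family of models discussed next, is to expand the input through randomly weighted feature and enhancement nodes and solve for the output weights in closed form. The sketch below is a minimal illustration of that idea only; the function names, node counts, and ridge term are assumptions for illustration, not the chapter's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_fit(X, Y, n_feature=20, n_enhance=40, reg=1e-2):
    """Broad-learning-style fit: random mapped feature nodes, nonlinear
    enhancement nodes, and a ridge-regularized closed-form solve for the
    output weights (no gradient descent)."""
    Wf = rng.standard_normal((X.shape[1], n_feature))  # random feature map
    Z = X @ Wf                                         # mapped feature nodes
    We = rng.standard_normal((n_feature, n_enhance))   # random enhancement map
    H = np.tanh(Z @ We)                                # enhancement nodes
    A = np.hstack([Z, H])                              # broad expanded layer
    # Ridge solution: W = (A^T A + reg*I)^-1 A^T Y
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def bls_predict(model, X):
    Wf, We, W = model
    Z = X @ Wf
    return np.hstack([Z, np.tanh(Z @ We)]) @ W
```

Because the only learned parameters are the output weights, training reduces to one linear solve; the full broad learning system also supports incrementally adding nodes without retraining from scratch, which this sketch omits.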
Feature extraction for emotion recognition is a difficult task (Zhao & Chen, 2021). Reliable features are needed to classify emotions correctly. Because of their universal approximation capabilities, single-layer feedforward neural networks (SLFNN) have been widely applied to classification problems (Leshno et al., 1993). However, they usually suffer from long training times and low convergence rates, as they are sensitive to hyperparameter settings such as the learning rate. The random vector functional-link neural network (RVFLNN) (Pao & Takefuji, 1992; Pao et al., 1994) was proposed to offer a different learning approach that eliminates long training times and provides generalization ability in function approximation. Its limitation is that it does not work well on large-scale data remodeling. The broad learning system (BLS) algorithm was proposed to handle large data sizes and model them in a dynamically stepwise manner. The BLS can also feed high-dimensional raw data directly into a neural network, taking raw features as inputs.

The proposed hyper-enhanced learning system (HELS) is constructed on the idea of the BLS. Unlike the BLS, however, the HELS takes extracted physiological features as inputs. These features can effectively and simultaneously generate enhanced feature nodes serving as weights to the originally extracted features, and are more informative for emotion state classification.

1.1 Emotion Work and Its Relation to Affective States

The management of one's personal feelings is defined as emotion work (Zapf et al., 2021). The two types of emotion work are evocation and suppression. Emotion evocation requires obtaining and bringing up subjective feelings (Chu et al., 2017). Emotion suppression requires withholding or hiding certain feelings (Chiang et al., 2021; Schouten et al., 2020). These feelings may be positive or negative (Fresco et al., 2014).
Emotion work is completed by a person, by others upon the person, or by the person upon others. This is done to achieve a level of belief satisfactory to oneself. Emotion work can be categorized into three specific types: cognitive, bodily, and expressive. Cognitive emotion work involves images, bodily emotion work relates to physical changes of the body, and expressive emotion work relates to gestures. For example, a fearful person uses expressive emotion work to enhance their confidence and strength by lifting their shoulders high and putting on a smile. A stressed person may use bodily emotion work by breathing more slowly to lower stress levels. Emotion work allows us to regulate our feelings so that our emotions suit our current state of mind and are viewed as appropriate. Since we want to maintain good relationships with our colleagues, we constantly work on our feelings to suit the situations we find ourselves in. This study on emotion recognition aims to identify evocative and suppressive emotions by extracting relevant features from physiological signals and empowering those features through the proposed hyper-enhanced learning system. This will be useful in the development and design of systems and biomarkers for early clinical assessment and intervention.

Emotions are the foundation of a human being's daily life and play a crucial role in human cognition, namely rational decision-making, perception, human interaction, and human intelligence (Johnson et al., 2020; Luo et al., 2020). In recent decades, research on emotion has increased and contributed immensely to fields such as psychology, medicine, history, the sociology of emotions, and computer science. These attempts to explain the origin, purpose, and other aspects of emotion have prompted more intense study of this topic, though more needs to be done to address key issues.
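The extraction of relevant features from physiological signals mentioned above can be made concrete with a small example. The sketch below computes band-power features, a classic hand-crafted descriptor for EEG-based emotion recognition; the sampling rate, band edges, and synthetic signal are illustrative assumptions, not the chapter's actual feature set:

```python
import numpy as np

def bandpower(x, fs, band):
    """Mean periodogram power of signal `x` inside `band` = (lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].mean()

fs = 256                    # assumed sampling rate (Hz)
t = np.arange(4 * fs) / fs  # 4 s of signal
# Synthetic "EEG": a dominant 10 Hz alpha rhythm plus broadband noise.
x = (np.sin(2 * np.pi * 10 * t)
     + 0.3 * np.random.default_rng(1).standard_normal(t.size))

features = {
    "theta": bandpower(x, fs, (4, 8)),
    "alpha": bandpower(x, fs, (8, 13)),
    "beta":  bandpower(x, fs, (13, 30)),
}
# The alpha-band power dominates, reflecting the 10 Hz component.
```

Vectors of such per-band, per-channel powers (across EEG, GSR, RES, EMG, and ECG channels) are the kind of extracted physiological features a learning system can then weight and classify.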
Furthermore, emotions have been widely ignored, especially in the field of HCI (Ren & Bao, 2020; Yun et al., 2021).

1.2 Qualitative Approach to Emotion Recognition

Emotion recognition involves the process of identifying human affect. The recognition task can be summarized as the automatic classification of human emotions from images or video sequences. People vary widely in their accuracy at recognizing other people's emotions. Current research in emotion recognition involves designing and using technologies to help understand and predict human states of mind. These technologies work best when multiple modalities are investigated. Most work on emotion recognition to date has been conducted on automating facial expression recognition (FER) from videos, vocal expressions from audio, written expressions from texts, and physiology measured from wearable biomarkers. Over decades of scientific research, these signals have been tested, and methods for automatic emotion classification have been developed and evaluated. Also, due to the potential of HCI systems and the attention drawn to their importance, researchers around the world are making efforts to find better and more appropriate ways to uniformly build relationships between the way computers and humans interact. To build a system for HCI, the emotional states of subjects must be known. Again, interest in emotion recognition is
