Immigration Board - [GONE] computer science (OR EE) related journals' review opportunity
c****l (posts: 1280) | 1 Giving back to this board: I have two computer science related journal reviews available, mainly needing backgrounds in human-computer interaction and pattern recognition. Both journals are quite good. I have my card now, so I no longer need to do reviews.
If you are interested, please send your CV to:
[email protected]
I can help recommend you, but I can't guarantee the editor will agree.
Also, it would be great if you could send some baozi (forum points) — I don't have many left to give out :-)
============manuscript 1=======
Emotion Recognition from EEG Signals by Using Multivariate Empirical Mode
Decomposition
This is the abstract:
This paper explores the advanced properties of empirical mode decomposition (EMD) and its multivariate extension (MEMD) for emotion recognition. Emotion recognition using EEG is a challenging task because of the non-stationary behaviour of the signals caused by complicated neuronal activity in the brain, so sophisticated signal processing methods are required to extract the hidden patterns in the EEG. In addition, multichannel analysis is another issue to be considered when dealing with EEG signals. EMD is a recently proposed iterative method for analyzing non-linear and non-stationary time series; it decomposes a signal into a set of oscillations called intrinsic mode functions (IMFs) without requiring a set of basis functions. In this study, an MEMD-based feature extraction method is proposed to process multichannel EEG signals for emotion recognition. The multichannel IMFs extracted by MEMD are analyzed using various time- and frequency-domain techniques such as power ratio, power spectral density, entropy, Hjorth parameters, and correlation, as features for the valence and arousal scales of the participants. The proposed method is applied to the DEAP emotional EEG data set, and the results are compared with similar previous studies for benchmarking.
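
If you want a concrete feel for the kinds of features this abstract lists, here is a minimal Python sketch of two of them (Hjorth parameters and a band power ratio) computed from a single decomposed component. The function names, the synthetic test signal, and the alpha-band default are my own illustrative choices, not taken from the manuscript.

import numpy as np
from scipy.signal import welch

def hjorth_parameters(x):
    # Hjorth activity, mobility, and complexity of a 1-D signal
    dx = np.diff(x)
    ddx = np.diff(dx)
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x
    mobility = np.sqrt(var_dx / var_x)
    complexity = np.sqrt(var_ddx / var_dx) / mobility
    return activity, mobility, complexity

def band_power_ratio(x, fs, band=(8.0, 13.0)):
    # Power in one frequency band (default: alpha) relative to total power, via Welch PSD
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 256))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[in_band], freqs[in_band]) / np.trapz(psd, freqs)

# Toy example: pretend this is one IMF from one EEG channel (10 Hz rhythm + noise, fs = 128 Hz)
fs = 128
t = np.arange(0, 4, 1 / fs)
imf = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
print(hjorth_parameters(imf))
print(band_power_ratio(imf, fs))

In the manuscript such features would presumably be computed per IMF and per channel and then fed to a classifier; the snippet above only shows the per-signal feature step.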
============manuscript 2=======
Enhancing the Accuracy of Gaze Control for Robot Arm Tele-Operation
Abstract:
Modern applications for the control of remote robotic systems require both a high degree of accuracy and an efficient, intuitive human-machine interface. The many degrees of freedom of remotely operated robots create a need for more cognitively efficient operator interfaces, including interfaces that integrate gaze-based interaction with other interaction modalities. To this end, a study has been conducted to address two primary research questions: i) what are the bounds of human depth judgement when targeting a point in three-dimensional space, represented on a two-dimensional screen, in order to direct a remote robot operation, and ii) can machine learning methods be used to more accurately map human gaze points on a two-dimensional screen to their two- and three-dimensional targets in the space represented on the screen. The first question has been explored by evaluating the accuracy with which human viewers can localize 2D and 3D locations represented on a screen using varying degrees of interactive scene exploration. The second research question has been explored using a combination of supervised and unsupervised machine learning techniques, including Fuzzy C-Means (FCM) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The resulting analysis quantifies the difficulty that humans and machines have in inferring depths from 2D or static 3D isometric representations, and the significant performance improvement gained from using a dynamic 3D representation. The investigation also demonstrated high accuracy in the machine determination of human visual deictic references on the screen by applying machine learning to cluster gaze points measured by an eye gaze tracker. To better understand the subjective experience of these different interaction scenarios, a series of questionnaires was used, and data from them are also presented. The study provides evidence that, by employing operator gaze data for robot motion target identification, aided by machine learning techniques, a human operator can more accurately guide a robotic platform using a virtual 3D reconstruction of the robot's operational environment rather than directly using, e.g., streaming video feeds.
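
For reviewers less familiar with Fuzzy C-Means, here is a short, self-contained Python sketch of the basic algorithm applied to clustering 2D gaze points. This is a generic illustration under my own assumptions (plain Euclidean FCM, synthetic fixation data in pixel coordinates), not the manuscript's actual pipeline, which also involves ANFIS.

import numpy as np

def fuzzy_c_means(points, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    # Plain Fuzzy C-Means: returns cluster centres and the soft membership matrix
    rng = np.random.default_rng(seed)
    n = len(points)
    u = rng.random((n_clusters, n))
    u /= u.sum(axis=0)                      # memberships of each point sum to 1
    for _ in range(max_iter):
        um = u ** m
        centres = um @ points / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(points[None, :, :] - centres[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)               # avoid division by zero
        u_new = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=0)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centres, u

# Example: noisy gaze fixations around two on-screen targets (pixel coordinates)
rng = np.random.default_rng(1)
gaze = np.vstack([rng.normal([400, 300], 15, (50, 2)),
                  rng.normal([900, 550], 15, (50, 2))])
centres, memberships = fuzzy_c_means(gaze, n_clusters=2)
print(centres)   # should land near (400, 300) and (900, 550)

The soft memberships are what make FCM attractive for noisy eye-tracker data: points between fixation clusters get graded weights rather than a hard assignment.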