publications [.bib]

[48]
A. Katsamanis, I. Rodomagoulakis, G. Potamianos, P. Maragos, and A. Tsiami. Robust Far-Field Spoken Command Recognition for Home Automation Combining Adaptation and Multichannel Processing. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2014.
[47]
C.-C. Lee, A. Katsamanis, M. Black, B. R. Baucom, A. Christensen, P. G. Georgiou, and S. Narayanan. Computing Vocal Entrainment: A Signal-derived PCA-based Quantification Scheme with Application to Affect Analysis in Married Couple Interactions. Computer Speech and Language, 2012 (in press).
[46]
A. Metallinou, A. Katsamanis, and S. Narayanan. Tracking continuous emotional trends of participants during affective dyadic interactions using body language and speech information. Image and Vision Computing, 31:137-152, 2013.
[pdf]
[45]
A. Tsiartas, T. Chaspari, A. Katsamanis, P. Ghosh, M. Li, M. Van Segbroeck, A. Potamianos, and S. Narayanan. Multi-band long-term signal variability features for robust voice activity detection. In Proc. Int'l Conf. on Speech Communication and Technology, 2013.
[pdf]
[44]
C.-C. Lee, A. Katsamanis, B. Baucom, P. Georgiou, and S. Narayanan. Using Measures of Vocal Entrainment to Inform Outcome-Related Behaviors in Marital Conflicts. In Proc. of the Asia-Pacific Signal and Information Processing Association Conference (APSIPA), 2012.
[pdf]
[43]
D. Traum, P. Aggarwal, R. Artstein, S. Foutz, J. Gerten, A. Katsamanis, D. Noren, and W. Swartout. Ada and Grace: Direct Interaction with Museum Visitors. In Proc. of Intelligent Virtual Agents Conference (IVA), 2012.
[pdf]
[42]
P. Aggarwal, R. Artstein, J. Gerten, A. Katsamanis, S. Narayanan, A. Nazarian, and D. Traum. The Twins corpus of museum visitor questions. In Proc. of the Language Resources and Evaluation Conference (LREC), 2012.
[pdf]
[41]
C.-C. Lee, A. Katsamanis, P. Georgiou, and S. Narayanan. Based on Isolated Saliency or Causal Integration? Toward a Better Understanding of Human Annotation Process using Multiple Instance Learning and Sequential Probability Ratio Test. In Proc. Int'l Conf. on Speech Communication and Technology, 2012.
[pdf]
[40]
A. Metallinou, A. Katsamanis, and S. Narayanan. A hierarchical framework for modeling multimodality and emotional evolution in affective dialogs. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2012.
[pdf]
[39]
M. Wöllmer, A. Metallinou, A. Katsamanis, B. Schuller, and S. Narayanan. Analyzing the memory of BLSTM neural networks for enhanced emotion classification in dyadic spoken interactions. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2012.
[pdf]
[38]
T. Chaspari, E. Mower Provost, A. Katsamanis, and S. Narayanan. An acoustic analysis of shared enjoyment in ECA interactions of children with autism. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2012.
[pdf]
[37]
M. Black, A. Katsamanis, B. Baucom, C.-C. Lee, A. Lammert, A. Christensen, P. Georgiou, and S. Narayanan. Toward automating a human behavioral coding system for married couples' interactions using speech acoustic features. Speech Communication, 55:1-21, 2013.
[36]
A. Metallinou, M. Wöllmer, A. Katsamanis, F. Eyben, B. Schuller, and S. Narayanan. Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification. IEEE Trans. Affective Computing, 3:184-198, 2012.
[35]
A. Katsamanis, J. Gibson, M. Black, and S. Narayanan. Multiple instance learning for classification of human behavior observations. In Proc. of International Conference on Affective Computing and Intelligent Interactions (ACII), 2011.
[pdf] [presentation]
[34]
C.-C. Lee, A. Katsamanis, M. Black, P. Georgiou, and S. Narayanan. Affective state recognition in married couples' interactions using PCA-based vocal entrainment measures with multiple instance learning. In Proc. of International Conference on Affective Computing and Intelligent Interactions (ACII), 2011.
[pdf] [presentation]
[33]
M. Black, P. Georgiou, A. Katsamanis, B. Baucom, and S. Narayanan. "You made me do it": Classification of blame in married couples' interaction by fusing automatically derived speech and language information. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf]
[32]
J. Gibson, A. Katsamanis, M. Black, and S. Narayanan. Automatic identification of salient acoustic instances in couples' behavioral interactions using diverse density support vector machines. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf]
[31]
A. Katsamanis, M. Black, P. Georgiou, L. Goldstein, and S. Narayanan. SailAlign: Robust long speech-text alignment. In Workshop on New Tools and Methods for Very-Large Scale Phonetics Research, 2011.
[pdf] [presentation] [software]
[30]
A. Katsamanis, E. Bresch, V. Ramanarayanan, and S. Narayanan. Validating rt-MRI based articulatory representations via articulatory recognition. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf] [presentation] [research]
[29]
A. Lammert, M. Proctor, A. Katsamanis, and S. Narayanan. Morphological variation in the adult vocal tract: A modeling study of its potential acoustic impact. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf]
[28]
C.-C. Lee, A. Katsamanis, M. Black, B. Baucom, P. Georgiou, and S. Narayanan. An analysis of PCA-based vocal entrainment measures in married couples' affective spoken interactions. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf]
[27]
A. Metallinou, A. Katsamanis, Y. Wang, and S. Narayanan. Tracking changes in continuous emotion states using body language and prosodic cues. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2011.
[pdf]
[26]
S. Narayanan, E. Bresch, P. Ghosh, L. Goldstein, A. Katsamanis, Y. Kim, A. Lammert, M. Proctor, V. Ramanarayanan, and Y. Zhu. A multimodal real-time MRI articulatory corpus for speech research. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf]
[25]
M. Proctor, A. Lammert, A. Katsamanis, L. Goldstein, C. Hagedorn, and S. Narayanan. Direct estimation of articulatory kinematics from real-time magnetic resonance image sequences. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf]
[24]
V. Ramanarayanan, A. Katsamanis, and S. Narayanan. Automatic data-driven learning of articulatory primitives from real-time MRI data using convolutive NMF with sparseness constraints. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf]
[23]
V. Rozgic, B. Xiao, A. Katsamanis, B. Baucom, P. Georgiou, and S. Narayanan. Estimation of ordinal approach-avoidance labels in dyadic interaction: ordinal logistic regression approach. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2011.
[pdf]
[22]
B. Xiao, V. Rozgic, A. Katsamanis, B. Baucom, P. Georgiou, and S. Narayanan. Acoustic and visual cues of turn-taking dynamics in dyadic interactions. In Proc. Int'l Conf. on Speech Communication and Technology, 2011.
[pdf]
[21]
M. P. Black, A. Katsamanis, C.-C. Lee, A. C. Lammert, B. R. Baucom, A. Christensen, P. G. Georgiou, and S. Narayanan. Automatic classification of married couples' behavior using audio features. In Proc. Int'l Conf. on Speech Communication and Technology, 2010.
[pdf]
[20]
E. Bresch, A. Katsamanis, L. Goldstein, and S. Narayanan. Statistical multi-stream modeling of real-time MRI articulatory speech data. In Proc. Int'l Conf. on Speech Communication and Technology, Makuhari, Japan, 2010.
[pdf] [presentation]
[19]
C.-C. Lee, M. P. Black, A. Katsamanis, A. C. Lammert, B. R. Baucom, A. Christensen, P. G. Georgiou, and S. Narayanan. Quantification of prosodic entrainment in affective spontaneous spoken interactions of married couples. In Proc. Int'l Conf. on Speech Communication and Technology, 2010.
[pdf]
[18]
M. Proctor, D. Bone, A. Katsamanis, and S. Narayanan. Rapid semi-automatic segmentation of real-time magnetic resonance images for parametric vocal tract analysis. In Proc. Int'l Conf. on Speech Communication and Technology, 2010.
[pdf]
[17]
V. Rozgic, B. Xiao, A. Katsamanis, B. Baucom, P. Georgiou, and S. Narayanan. A new multichannel multimodal dyadic interaction database. In Proc. Int'l Conf. on Speech Communication and Technology, 2010.
[pdf]
[16]
A. Katsamanis, G. Papandreou, and P. Maragos. Face active appearance modeling and speech acoustic information to recover articulation. IEEE Trans. Audio, Speech, and Language Processing, 17:411–422, 2009.
[pdf] [research] [doi]
[15]
G. Papandreou, A. Katsamanis, V. Pitsikalis, and P. Maragos. Adaptive multimodal fusion by uncertainty compensation with application to audio-visual speech recognition. IEEE Trans. Audio, Speech, and Language Processing, 17:423–435, 2009.
[pdf] [research] [doi]
[14]
A. Roussos, A. Katsamanis, and P. Maragos. Tongue tracking in ultrasound images with active appearance models. In Proc. IEEE Int'l Conf. on Image Processing, 2009.
[pdf]
[13]
S. Theodorakis, A. Katsamanis, and P. Maragos. Product-HMMs for automatic sign language recognition. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2009.
[pdf]
[12]
A. Katsamanis, G. Ananthakrishnan, G. Papandreou, P. Maragos, and O. Engwall. Audiovisual speech inversion by switching dynamical modeling governed by a hidden Markov process. In Proc. European Signal Processing Conference, 2008.
[pdf] [presentation]
[11]
A. Katsamanis, G. Papandreou, and P. Maragos. Audiovisual-to-articulatory speech inversion using active appearance models for the face and hidden Markov models for the dynamics. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2008.
[pdf] [presentation]
[10]
A. Katsamanis, A. Roussos, P. Maragos, M. Aron, and M.-O. Berger. Inversion from audiovisual speech to articulatory information by exploiting multimodal data. In Proc. Int'l Seminar on Speech Production, 2008.
[pdf] [presentation] [research]
[9]
S. Lefkimmiatis, A. Katsamanis, and P. Maragos. Multisensor multiband cross-energy tracking for feature extraction and recognition. In Proc. IEEE Int'l Conf. Acous., Speech, and Signal Processing, 2008.
[pdf]
[8]
P. Maragos, A. Gros, A. Katsamanis, and G. Papandreou. Cross-modal integration for performance improving in multimedia: A review. In Multimodal Processing and Interaction: Audio, Video, Text. Springer-Verlag, 2008.
[doi]
[7]
G. Papandreou, A. Katsamanis, V. Pitsikalis, and P. Maragos. Adaptive multimodal fusion by uncertainty compensation with application to audio-visual speech recognition. In Multimodal Processing and Interaction: Audio, Video, Text. Springer-Verlag, 2008.
[doi]
[6]
A. Katsamanis, G. Papandreou, and P. Maragos. Audiovisual-to-articulatory speech inversion using HMMs. In Proc. of IEEE Int'l Workshop on Multimedia Signal Processing, 2007.
[pdf] [presentation]
[5]
A. Katsamanis, P. Tsiakoulis, P. Maragos, and A. Potamianos. Investigations in articulatory synthesis. In Proc. Int'l Congress on Phonetic Sciences, 2007.
[pdf] [presentation]
[4]
G. Papandreou, A. Katsamanis, V. Pitsikalis, and P. Maragos. Multimodal fusion and learning with uncertain features applied to audiovisual speech recognition. In Proc. of IEEE Int'l Workshop on Multimedia Signal Processing, 2007.
[pdf]
[3]
A. Katsamanis, G. Papandreou, V. Pitsikalis, and P. Maragos. Multimodal fusion by adaptive compensation for feature uncertainty with application to audiovisual speech recognition. In Proc. European Signal Processing Conference, 2006.
[pdf]
[2]
V. Pitsikalis, A. Katsamanis, G. Papandreou, and P. Maragos. Adaptive multimodal fusion by uncertainty compensation. In Proc. Int'l Conf. on Speech Communication and Technology, pages 2458–2461, 2006.
[pdf]
[1]
A. Katsamanis and P. Maragos. Advances in statistical estimation and tracking of AM-FM speech components. In Proc. Int'l Conf. on Speech Communication and Technology, 2005.
[pdf] [presentation]

Theses

[1]
A. Katsamanis. Computational Speech Modeling Exploiting Aeroacoustics in the Vocal Tract (in Greek). PhD Thesis, School of E.C.E., National Technical University of Athens. Advisor: Prof. P. Maragos.
[pdf]
[2]
A. Katsamanis. Statistical Multiband Demodulation and Speech Analysis using Kalman Filters and Energy Related Methods (in Greek). Diploma Thesis, School of E.C.E., National Technical University of Athens. Supervisor: Prof. P. Maragos.
[pdf]