How people talk and understand each other has always fascinated me. The big question for me is how to develop technology that understands spoken language: how can we make automatic speech recognition more intelligent? Besides what is said, there is also a lot of information in how something is said: aspects of physical, emotional, and mental states resonate in the voice, both consciously and unconsciously. I am particularly interested in the automatic interpretation of this implicit information, with the aim of, for example, enabling conversational agents (such as Siri) to respond more appropriately to children or older adults, or developing apps that offer remote support to people suffering from depression.

After studying Linguistics (specialisation Language and Speech Technology) in Utrecht, I ended up at TNO, where I investigated automatic emotion recognition in speech. I then moved to the Human Media Interaction group at the University of Twente, where I still work on the automatic analysis of nonverbal aspects of speech communication (e.g. laughing, backchanneling) in human-human and human-machine interaction. Besides doing research, I also teach speech processing, affective computing, and interaction technology.

Expertise

  • Computer Science

    • Robot
    • Detection
    • Speech Emotion Recognition
    • Annotation
  • Psychology

    • Emotion
    • Humans
    • Behavior
    • Conversation

My research mainly focuses on automatically analysing and interpreting nonverbal aspects of speech communication that reveal how a conversation is going and what someone's physical, socio-emotional, and mental state is. My goal is to make automatic speech recognition more intelligent. Among other things, I have worked on automatic detection of laughter, automatic emotion recognition in speech, and automatic generation of backchannels for artificial agents. Currently, I supervise a number of PhD students researching multimodal emotion expression in older adults and responsible design for child-robot interaction. I also supervise master's students in their research on technology for vulnerable people (e.g. people with dementia, people with multiple disabilities) and on human-robot interaction.
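
As a rough illustration of the kind of analysis this involves, the sketch below extracts a few simple acoustic features (MFCCs, pitch, and energy) from each utterance and trains a classifier on them. This is only a minimal baseline sketch for speech emotion recognition, not the method used in my work; the file names and labels are hypothetical, and it assumes the librosa and scikit-learn Python libraries.

```python
# Minimal sketch: utterance-level speech emotion recognition with
# hand-crafted acoustic features and a simple classifier.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(path: str) -> np.ndarray:
    """Summarise one utterance as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral shape
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)        # pitch contour
    rms = librosa.feature.rms(y=y)                        # energy
    # Mean/std pooling over time turns frame-level features into one vector.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [f0.mean(), f0.std()],
        [rms.mean(), rms.std()],
    ])

# Hypothetical labelled utterances (paths and emotion labels are placeholders).
paths = ["clip_angry.wav", "clip_neutral.wav", "clip_happy.wav"]
labels = ["angry", "neutral", "happy"]

X = np.vstack([utterance_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(X))
```

In practice, richer feature sets (e.g. openSMILE) and sequence or neural models are commonly used, but the pooling-plus-classifier pattern remains a typical starting point.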

You can also read more about my research at https://www.utwente.nl/en/research/researchers/featured-scientists/truong/index/ and on my personal website at http://khiettruong.space/

Publications

2024
A Conversational Robot for Children’s Access to a Cultural Heritage Multimedia Archive. In Advances in Information Retrieval: 46th European Conference on Information Retrieval, ECIR 2024, Glasgow, UK, March 24–28, 2024, Proceedings, Part V (pp. 144–151). Springer. Beelen, T., Ordelman, R., Truong, K. P., Evers, V. & Huibers, T. https://doi.org/10.1007/978-3-031-56069-9_11
2023
Automated speech audiometry: Can it work using open-source pre-trained Kaldi-NL automatic speech recognition? ArXiv.org. Araiza-Illan, G., Meyer, L., Truong, K. P. & Baskent, D. https://doi.org/10.1177/23312165241229057
Children’s Trust in Robots and the Information They Provide. In CHI EA '23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, Article 66 (pp. 1-7). ACM Publishing. Beelen, T. H. J., Velner, E., Truong, K. P., Ordelman, R. J. F., Huibers, T. W. C. & Evers, V. https://doi.org/10.1145/3544549.3585801
Natural Language Processing Markers for Psychosis and Other Psychiatric Disorders: Emerging Themes and Research Agenda From a Cross-Linguistic Workshop (pp. S86-S92). Corona Hernández, H., Corcoran, C., Achim, A. M., De Boer, J. N., Boerma, T., Brederoo, S. G., Cecchi, G. A., Ciampelli, S., Elvevåg, B., Fusaroli, R., Giordano, S., Hauglid, M., van Hessen, A., Hinzen, W., Homan, P., de Kloet, S. F., Koops, S., Kuperberg, G. R., Maheshwari, K., … Palaniyappan, L. https://doi.org/10.1093/schbul/sbac215
Robot-Supported Information Search: Which Conversational Interaction Style do Children Prefer? In HRI '23: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (pp. 466–470). ACM Publishing. Sharma, S., Beelen, T. & Truong, K. P. https://doi.org/10.1145/3568294.3580128
Acoustic speech markers for schizophrenia-spectrum disorders: A diagnostic and symptom-recognition tool (pp. 1302-1312). de Boer, J. N., Voppel, A. E., Brederoo, S. G., Schnack, H. G., Truong, K. P., Wijnen, F. N. K. & Sommer, I. E. C. https://doi.org/10.1017/S0033291721002804
Effects of perceived gender on the perceived social function of laughter. In Proceedings of INTERSPEECH 2023 (pp. 1878-1882). International Speech Communication Association (ISCA). Arts, J. & Truong, K. P. https://doi.org/10.21437/Interspeech.2023-846
Laughter in task-based settings: Whom we talk to affects how, when, and how often we laugh. In Proceedings of INTERSPEECH 2023 (pp. 3622-3626). International Speech Communication Association (ISCA). Branco, C., Trancoso, I., Infante, P. & Truong, K. P. https://doi.org/10.21437/Interspeech.2023-1914
Acoustic characteristics of depression in older adults’ speech: The role of covariates. In Proceedings of INTERSPEECH 2023 (pp. 4159-4163). International Speech Communication Association (ISCA). Mijnders, C., Janse, E., Naarding, P. & Truong, K. P. https://doi.org/10.21437/Interspeech.2023-839
2022

Courses academic year 2023/2024

Courses in the current academic year are added as soon as they are finalised in the Osiris system, so the list may not yet be complete for the whole academic year.

Courses academic year 2022/2023

Current projects

Advancing technology for multimodal analysis of emotion expression in dementia

Multimodal analysis of emotional expression in spoken memories of older adults, life story books, reminiscence therapy

Children and AI: talking trust and responsible spoken search

CHATTERS

Responsible design in child-robot-media interaction, spoken interaction between child and conversational agent

4TU Humans & Technology: Smart Social Systems and Spaces for Living Well

Social signal processing and affective computing in speech

Finished projects

EU-FP7 SQUIRREL (Clearing Clutter Bit by Bit)

Robot that helps children tidy up, social signal processing in child-robot interaction

COMMIT P3 SENSEI

Exercise intensity detection through voice, running app

EU-FP7 SSPNet (Social Signal Processing Network)

Automatic analysis of laughter, backchannel generation, speech synchrony

Address

University of Twente

Citadel (building no. 09), room H235
Hallenweg 15
7522 NH Enschede
Netherlands

