Curriculum Vitae

Education

  • 1984-1993: Gymnasium (Steinfurt, Germany)
  • 1994-2001: Diploma in Psychology (Regensburg, Germany)
  • 1996-2003: Subsidiary subject: Business Computing
  • 2009: Doctorate (Promotion), Passau University, summa cum laude. Doctoral thesis: Messung, Steuerung und Effektivität des Usability Evaluationsprozesses (Measurement, Control and Effectiveness of the Usability Evaluation Process)

Work experience

  • 2001-2003: Researcher at the library of the University of Regensburg
  • 2003-2006: Researcher at the Fraunhofer Institute for Experimental Software Engineering (Kaiserslautern, Germany)
  • 2007-2008: Stipend holder at the Passau Graduate School of Business and Economics (Passau, Germany)
  • 2008-2009: Lecturer in Business Computing at Passau University
  • 2009 – now: Assistant professor at the department of Cognitive Psychology & Ergonomics (CPE), University of Twente; permanent contract since 08/2011

Expertise

  • Computer Science

    • Usability
    • User
    • Testing
    • Simulation
    • Validation
  • Psychology

    • Tools
    • Adaptation
    • Semantics

Research interests

The research I am doing is in the field of Human-Computer Interaction (HCI). HCI is a member of the Human Factors family of research disciplines. I am particularly interested in the following topics:

Effectiveness of Usability Evaluation

Imagine you are developing a medical infusion pump, or any other device where people can suffer serious harm if the device isn’t designed properly. It has become good practice to do validation tests on such devices, which are essentially usability tests showing that the device can be operated safely.

In the past, I examined the question of “How many users does a usability testing study require?” I analyzed a good dozen data sets from usability studies and went deep into the mathematics of the problem. The short answer: testing five users is not enough, and magic numbers are strictly hocus-pocus. Previously suggested formulas are flawed. In my papers I develop a mathematical model for accurately estimating the progress of usability evaluation.
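
To illustrate the kind of reasoning at stake, the sketch below (Python, with assumed numbers) implements the classic detection formula behind the “five users” claim: the probability of finding a problem at least once is 1 - (1 - p)^n, for a single, fixed detection probability p. This is the formula my work argues against, not my own model; it is shown only to make clear how strongly the “magic number” depends on the assumed p.

    # Classic "magic number" formula for usability testing:
    # probability that a problem with per-user detection probability p
    # is found at least once when testing n users.
    # Assumes one fixed p for all problems and users -- exactly the
    # assumption criticized above.

    def p_found(p: float, n: int) -> float:
        """Probability of detecting a problem at least once with n users."""
        return 1 - (1 - p) ** n

    def users_needed(p: float, target: float = 0.80) -> int:
        """Smallest n such that the detection probability reaches the target."""
        n = 1
        while p_found(p, n) < target:
            n += 1
        return n

    if __name__ == "__main__":
        # With the often-cited p = 0.31, five users seem to suffice ...
        print(users_needed(0.31))  # -> 5
        # ... but for rarer problems the required sample size explodes.
        print(users_needed(0.10))  # -> 16
        print(users_needed(0.05))  # -> 32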

Further questions concern the comparison of usability evaluation methods. They appear to differ primarily in a qualitative way. For example, inspections do not find fewer usability problems, but different ones. My conclusion: a mix of usability evaluation methods is most effective.

Geekism

Most software designers nowadays assume that users are primarily driven by two needs: (1) users want to achieve their goals with as little effort as possible; this is called the utilitarian need. (2) Users strive for experience; this is called the hedonistic or experiential drive. Take a GPS-based navigation app as an example. In some situations a user may primarily be interested in being directed to a target via the shortest possible route. In other situations, users may be more interested in being prompted for points of interest; it is the experience of exploring new places that creates the appeal.

These two perspectives are important in understanding why people use technology, but they are not complete. In both perspectives, technology is purely a means. For some individuals, a piece of technology is appealing in itself. These individuals, whom we call geeks, are more than just users or consumers of technology. They have an inner drive to explore, understand and modify (or even re-create) technology. While real geeks are a minority, I assume that geek tendencies can be observed in large parts of the population. For example, think of individuals who spend a lot of time customizing their new smartphones. Or consider all the young people who help their parents and grandparents keep pace with the modern age.

The goals of this research theme are to

  • identify the traits underlying geekism
  • find valid measures for geekism
  • develop a theory of design for geeks

Diversity in human-computer interaction

Imagine you enter a shoe shop and ask for a pair in size 46. The saleslady, with an apologetic smile, informs you that all shoes are only available in size 43, because this is the average foot size of male Central Europeans. What sounds absurd in this example happens frequently in the design of interactive products: a confusion of the average with the typical. This is exactly my working definition: diversity is when the average is not the typical. Diversity applies to the three factors of usability alike: users, tasks and systems.
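
The shoe-shop confusion can be made concrete with a small, purely hypothetical simulation: when performance is skewed (log-normally distributed task completion times are a common assumption), the arithmetic mean does not describe the typical user.

    # Hypothetical illustration: with skewed performance data,
    # the average is not the typical.
    import numpy as np

    rng = np.random.default_rng(42)

    # Simulated task completion times in seconds (assumed, not real data).
    times = rng.lognormal(mean=3.0, sigma=0.6, size=10_000)

    mean_time = times.mean()        # pulled upwards by the slow tail
    median_time = np.median(times)  # closer to the "typical" user
    share_faster_than_mean = (times < mean_time).mean()

    print(f"mean:   {mean_time:.1f} s")
    print(f"median: {median_time:.1f} s")
    print(f"share of users faster than the mean: {share_faster_than_mean:.0%}")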

Unlike many researchers, who care little about the real-world impact of what they find in their labs, I do diversity research quantitatively. If, for example, a researcher claims that users with lower working memory capacity are slower at some information browsing tasks in the lab, then my questions are: By how much? Does it matter? Does it matter in real-world tasks?

Learning in HCI

Learnability is an important criterion for usability. Learning is characterized by change over time. Surprisingly, only few studies have examined the change of performance or satisfaction over repeated trials. Interesting questions are: How quickly do users get acquainted with a new system? Can they learn to work around usability problems? Do elderly users learn more slowly?
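
One common way to make such questions measurable is to fit a learning curve to performance over repeated trials. The sketch below assumes an exponential law of practice (asymptote + amplitude · exp(-rate · trial)) and fits it to simulated data; it illustrates the approach and is not an analysis of real study data.

    # Sketch: fitting an exponential learning curve to repeated trials.
    # Assumed model: time_on_task = asymptote + amplitude * exp(-rate * trial)
    import numpy as np
    from scipy.optimize import curve_fit

    def learning_curve(trial, asymptote, amplitude, rate):
        return asymptote + amplitude * np.exp(-rate * trial)

    # Simulated data: 12 repeated trials with noise (hypothetical).
    rng = np.random.default_rng(1)
    trials = np.arange(12)
    observed = learning_curve(trials, 20, 40, 0.5) + rng.normal(0, 2, size=trials.size)

    # asymptote: final skill level; amplitude: room for improvement;
    # rate: how quickly users get acquainted with the system.
    (asymptote, amplitude, rate), _ = curve_fit(
        learning_curve, trials, observed, p0=(10, 30, 0.3)
    )
    print(f"asymptote={asymptote:.1f}s amplitude={amplitude:.1f}s rate={rate:.2f}")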

Users' mental models

Imagine you want to design a university website which contains thousands of information pieces. How do you best organize the information so that users easily find their way? A well-known method for eliciting mental models is card sorting. Simply put: you ask a number of users to sort information pieces into groups. Two items that are frequently placed together should also become close neighbours on the website. This way one can determine the optimal navigation structure.
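
One common way to analyze card sorting data, sketched below with made-up items and my own simplifying assumptions, is to count for every pair of items how often participants placed them in the same group, turn these co-occurrence counts into distances, and derive groups by hierarchical clustering.

    # Sketch: from card sorts to a navigation structure via a
    # co-occurrence matrix and hierarchical clustering.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    items = ["admission", "tuition fees", "exam dates",
             "course catalogue", "campus map", "parking"]

    # Each participant's sort: item index -> group label (hypothetical data).
    sorts = [
        {0: "study", 1: "study", 2: "study", 3: "study", 4: "visit", 5: "visit"},
        {0: "apply", 1: "apply", 2: "courses", 3: "courses", 4: "visit", 5: "visit"},
        {0: "apply", 1: "money", 2: "courses", 3: "courses", 4: "visit", 5: "visit"},
    ]

    n = len(items)
    co = np.zeros((n, n))
    for sort in sorts:
        for i in range(n):
            for j in range(n):
                if sort[i] == sort[j]:
                    co[i, j] += 1

    # Items that are often sorted together get a small distance.
    distance = 1 - co / len(sorts)
    np.fill_diagonal(distance, 0)

    # Average-linkage clustering on the condensed distance matrix.
    tree = linkage(squareform(distance), method="average")
    clusters = fcluster(tree, t=2, criterion="maxclust")
    for item, cluster in zip(items, clusters):
        print(cluster, item)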

But wait! Card sorting is widely used, but it is not fully validated. For example: Do users take longer on a website that doesn’t match their mental model? What if users are diverse, i.e. there is more than one mental model? What is the most efficient way to run card sorting studies and analyze the data?

Publications

2024

  • Exploring postictal recovery with acetaminophen or nimodipine: A randomized-controlled crossover trial (2024). Annals of Clinical and Translational Neurology, 11(9), 2289-2300. Pottkämper, J. C. M., Verdijk, J. P. A. J., Stuiver, S., Aalbregt, E., ten Doesschate, F., Verwijk, E., Schmettow, M., van Wingen, G. A., van Putten, M. J. A. M., Hofmeijer, J. & van Waarde, J. A. https://doi.org/10.1002/acn3.52143
  • Exploring the influence of ground-dwelling ant bioturbation activity on physico-chemical, biological properties and heavy metal pollution in coal mine spoil (2024). Pedobiologia, 104, Article 150960. Khan, S. R., Singh, P. C., Schmettow, M., Singh, S. K. & Rastogi, N. https://doi.org/10.1016/j.pedobi.2024.150960

2023

  • Ciao AI: the Italian adaptation and validation of the Chatbot Usability Scale (2023). Personal and Ubiquitous Computing, 27, 2161-2170. Borsci, S., Prati, E., Malizia, A., Schmettow, M., Chamberlain, A. & Federici, S. https://doi.org/10.1007/s00779-023-01731-2
  • Opportunities and challenges of using simulation technology and wearables for skill assessment (2023). [Contribution to conference › Abstract] 28th Annual Meeting of the Society for Simulation Applied to Medicine, SESAM 2023. Groenier, M., Schmettow, M., Halfwerk, F. R. & Endedijk, M.
  • A confirmatory factorial analysis of the Chatbot Usability Scale: A multilanguage validation (2023). Personal and Ubiquitous Computing, 27, 317-330. Borsci, S., Schmettow, M., Malizia, A., Chamberlain, A. & Van Der Velde, F. https://doi.org/10.1007/s00779-022-01690-0
  • Seizure duration predicts postictal electroencephalographic recovery after electroconvulsive therapy-induced seizures (2023). Clinical Neurophysiology, 148, 1-8. Pottkämper, J. C. M., Verdijk, J. P. A. J., Stuiver, S., Aalbregt, E., Schmettow, M., Hofmeijer, J., van Waarde, J. A. & van Putten, M. J. A. M. https://doi.org/10.1016/j.clinph.2023.01.008

2022

  • “Ciao AI”: The Italian adaptation and validation of the Chatbot Usability Scale (2022). [Working paper › Preprint]. PsyArXiv. Borsci, S., Prati, E., Federici, S., Malizia, A., Schmettow, M. & Chamberlain, A. https://doi.org/10.31234/osf.io/3hcgy
  • Improving clarity, cooperation and driver experience in lane change manoeuvres (2022). Transportation Research Interdisciplinary Perspectives, 13, Article 100553. Haar, A., Haeske, A. B., Kleen, A., Schmettow, M. & Verwey, W. B. https://doi.org/10.1016/j.trip.2022.100553
  • The Chatbot Usability Scale: the Design and Pilot of a Usability Scale for Interaction with AI-Based Conversational Agents (2022). Personal and Ubiquitous Computing, 26(1), 95-119. Borsci, S., Malizia, A., Schmettow, M., Van Der Velde, F., Tariverdiyeva, G., Balaji, D. & Chamberlain, A. https://doi.org/10.1007/s00779-021-01582-9

Address

University of Twente

Cubicus (building no. 41), room B324
De Zul 10
7522 NJ Enschede
Netherlands
