Welcome...

M. Schmettow (Martin)

Assistant Professor

About Me

Curriculum Vitae

Education

  • 1984-1993: Gymnasium (Steinfurt, Germany)
  • 1994-2001: Diploma Psychology (Regensburg, Germany)
  • 1996-2003: Subsidiary subject: Business Computing
  • 2009: PhD (Promotion), Passau University, summa cum laude. Doctoral thesis: Messung, Steuerung und Effektivität des Usability Evaluationsprozesses (Measurement, Control and Effectiveness of the Usability Evaluation Process)

Work experience

  • 2001-2003: Researcher at the University Library of Regensburg (Germany)
  • 2003-2006: Researcher at the Fraunhofer Institute for Experimental Software Engineering (Kaiserslautern, Germany)
  • 2007-2008: Scholarship holder at the Passau Graduate School of Business and Economics (Passau, Germany)
  • 2008-2009: Lecturer in Business Computing at Passau University
  • 2009-now: Assistant professor at the department of Cognitive Psychology & Ergonomics (CPE), University of Twente; permanent contract since 08/2011

Expertise

Medicine & Life Sciences
Equipment Design
Infusion Pumps
Engineering & Materials Science
Biomedical Equipment
Head-Up Displays
Pumps
Robotic Surgery
Semantics
Testing

Research

The research I am doing is in the field of Human-Computer Interaction (HCI), a member of the Human Factors family of research disciplines. I am particularly interested in the following topics:

Effectiveness of Usability Evaluation

Imagine you are developing a medical infusion pump, or any other device where people can suffer serious harm if it is not designed properly. It has become good practice to run validation tests on such devices: essentially a usability test demonstrating that the device can be operated safely.

In the past, I examined the question: "How many users does a usability testing study require?" I analyzed a good dozen data sets from usability studies and went deep into the mathematics of the problem. The short answer: testing five users is not enough, and magic numbers are strictly hocus-pocus. Previously suggested formulas are flawed. In my papers, I develop a mathematical model for accurately estimating the progress of usability evaluation.
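The core of the "magic number" debate can be illustrated with a short simulation (a minimal sketch, not the model from my papers; all parameters are invented for illustration). Under the textbook assumption that every usability problem is detected with the same probability p per user, the expected share of discovered problems after n users is 1 − (1 − p)^n. Once detection probabilities vary across problems, that formula overestimates progress:

```python
import numpy as np

rng = np.random.default_rng(1)

def discovered_fixed(p, n):
    """Expected share of problems found after n users,
    assuming every problem has the same detection probability p."""
    return 1 - (1 - p) ** n

def discovered_hetero(alpha, beta, n, n_sim=100_000):
    """Same expectation, but detection probabilities vary across
    problems, here following a Beta(alpha, beta) distribution."""
    p = rng.beta(alpha, beta, size=n_sim)
    return np.mean(1 - (1 - p) ** n)

n = 5
mean_p = 0.3
# Beta(0.6, 1.4) also has mean 0.3, but the probability varies across problems
print(discovered_fixed(mean_p, n))     # 0.83193 exactly
print(discovered_hetero(0.6, 1.4, n))  # ~0.62, noticeably lower
```

Because 1 − (1 − p)^n is concave in p, averaging over heterogeneous problems always yields less progress than plugging in the average detection rate, which is why five users typically discover far fewer problems than the classic formula promises.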

Further questions concern the comparison of usability evaluation methods. They appear to differ primarily qualitatively. For example, inspections do not find fewer usability problems, but different ones. My conclusion: a mix of usability evaluation methods is most effective.

Geekism

Most software designers nowadays assume that users are primarily driven by two needs: (1) users want to achieve their goals with as little effort as possible, which is called the utilitarian need; (2) users strive for experience, which is called the hedonistic or experiential drive. Take, as an example, a GPS-based navigation app. In some situations, a user may primarily be interested in being directed to a target via the shortest possible route. In other situations, users may be more interested in being prompted about points of interest. It is the experience of exploring new places that creates the appeal.

These two perspectives are important in understanding why people use technology, but they are not complete. In both perspectives, technology is a pure means. For some individuals, a piece of technology is appealing in itself. These individuals, whom we call geeks, are more than just users or consumers of technology. They have an inner drive to explore, understand and modify (or even re-create) technology. While real geeks are a minority, I assume that geek tendencies can be observed in large parts of the population. For example, think of individuals who spend a lot of time customizing their new smartphones. Or, consider all the young people who help their parents and grandparents keep pace with the modern age.

The goals of this research theme are to:

  • identify the traits underlying geekism
  • find valid measures for geekism
  • develop a theory of design for geeks

Diversity in human-computer interaction

Imagine you enter a shoe shop and ask for a pair in size 46. The saleslady, with an apologetic smile, informs you that all shoes are only available in size 43, because this is the average foot size of male Central Europeans. What sounds absurd in this example happens frequently in the design of interactive products: a confusion of the average with the typical. This is exactly my working definition: diversity is when the average is not the typical. Diversity applies to all three factors of usability alike: users, tasks and systems.

While many researchers care little about the real-world impact of what they find in their labs, the way I do diversity research is quantitative. If, for example, a researcher claims that users with lower working memory capacity are slower at certain information-browsing tasks in the lab, then my questions are: By how much? Does it matter? Does it matter in real-world tasks?
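The "By how much?" question can be made concrete with a toy example (all numbers invented; this is a generic effect-size calculation, not data from any actual study). The raw difference in means answers the practical question, while a standardized effect size such as Cohen's d allows comparison across studies:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lab data: task-completion times (seconds) for users
# with higher vs. lower working-memory capacity, 50 users per group.
high_wm = rng.normal(40, 8, size=50)
low_wm = rng.normal(46, 8, size=50)

# "By how much?": the raw mean difference, in seconds
diff = low_wm.mean() - high_wm.mean()

# A standardized effect size (Cohen's d) for cross-study comparison
pooled_sd = np.sqrt((high_wm.var(ddof=1) + low_wm.var(ddof=1)) / 2)
d = diff / pooled_sd

print(f"slower by {diff:.1f} s on average, Cohen's d = {d:.2f}")
```

Whether a difference of a few seconds matters, of course, still depends on the real-world task: in a medical emergency it may be critical, in leisurely browsing it may not.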

Learning in HCI

Learnability is an important criterion for usability. Learning is characterized by change over time. Surprisingly, only a few studies have examined the change of performance or satisfaction over repeated trials. Interesting questions are: How quickly do users get acquainted with a new system? Can they learn to work around usability problems? Do elderly users learn more slowly?
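One classic way to quantify such change is the power law of practice, T_n = B · n^(−c), where T_n is performance time on trial n, B the first-trial time, and c the learning rate. The sketch below (simulated data; B, c and the noise level are made up) shows that the law becomes linear in log-log space, so ordinary least squares recovers the learning rate:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated task-completion times (seconds) over 20 repeated trials,
# following the power law of practice T_n = B * n^(-c), plus noise.
trials = np.arange(1, 21)
B_true, c_true = 60.0, 0.4
times = B_true * trials ** -c_true * rng.lognormal(0, 0.05, size=trials.size)

# log T = log B - c * log n, so a linear fit in log-log space
# estimates the learning rate c and the first-trial time B.
slope, intercept = np.polyfit(np.log(trials), np.log(times), 1)
c_hat, B_hat = -slope, np.exp(intercept)

print(f"estimated learning rate c ~ {c_hat:.2f} (true {c_true})")
print(f"estimated first-trial time B ~ {B_hat:.1f} s (true {B_true})")
```

Fitting such curves per user would, for instance, let one test directly whether elderly users have a smaller learning rate c rather than merely a larger first-trial time B.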

Users' mental models

Imagine you want to design a university website which contains thousands of information pieces. How do you best organize the information such that users easily find their way? A well-known method for eliciting mental models is card sorting. Simply put: you let a number of users sort information pieces into groups. Two items that are often placed together should also become close neighbors on the website. This way one can determine the optimal navigation structure.

But wait! Card sorting is widely used, but it is not fully validated. For example: Do users take longer on a website that doesn't match their mental model? What if users are diverse, i.e. there is more than one mental model? What is the most efficient way to run card sorting studies and analyze the data?
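The basic analysis step behind card sorting can be sketched in a few lines (a minimal illustration with invented data; real studies use many more items, participants, and a proper clustering step on top). Each participant's sort is tallied into a co-occurrence matrix; the normalized matrix gives the share of participants who placed two items in the same pile:

```python
import numpy as np

# Hypothetical card-sorting results: each of three participants
# grouped six website items (indices 0..5) into piles.
sorts = [
    [(0, 1), (2, 3), (4, 5)],
    [(0, 1, 2), (3,), (4, 5)],
    [(0, 1), (2,), (3, 4, 5)],
]

n_items = 6
co = np.zeros((n_items, n_items))
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                if i != j:
                    co[i, j] += 1

# Normalize to a similarity in [0, 1]: the share of participants
# who placed the two items in the same pile.
sim = co / len(sorts)

# Items 0 and 1 (and likewise 4 and 5) were grouped together by every
# participant, making them the strongest candidates for being
# neighbors in the navigation structure.
i, j = np.unravel_index(np.argmax(sim), sim.shape)
print(i, j, sim[i, j])
```

Feeding 1 − sim into a hierarchical clustering would then yield the candidate navigation tree; the open questions above concern how well that tree actually predicts user performance.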

Publications

Recent
Borsci, S., Prati, E., Malizia, A., Schmettow, M., Chamberlain, A., & Federici, S. (2023). Ciao AI: the Italian adaptation and validation of the Chatbot Usability Scale. Personal and ubiquitous computing, 27, 2161-2170. https://doi.org/10.1007/s00779-023-01731-2
Groenier, M., Schmettow, M., Halfwerk, F. R., & Endedijk, M. (2023). Opportunities and challenges of using simulation technology and wearables for skill assessment. Abstract from 28th Annual Meeting of the Society for Simulation Applied to Medicine, SESAM 2023, Lisbon, Portugal.
Haar, A., Haeske, A. B., Kleen, A., Schmettow, M., & Verwey, W. B. (2022). Improving clarity, cooperation and driver experience in lane change manoeuvres. Transportation Research Interdisciplinary Perspectives, 13, Article 100553. https://doi.org/10.1016/j.trip.2022.100553
Witte, T. E. F., Schmettow, M., & Groenier, M. (2019). Diagnostic Requirements for Efficient, Adaptive Robotic Surgery Training. In R. A. Sottilare, & J. Schwarz (Eds.), Adaptive Instructional Systems: 1st International Conference, AIS 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Proceedings (pp. 469-481). (Lecture Notes in Computer Science; Vol. 11597 LNCS). Springer. https://doi.org/10.1007/978-3-030-22341-0_37

UT Research Information System

Contact Details

Visiting Address

University of Twente
Faculty of Behavioural, Management and Social Sciences
Cubicus (building no. 41), room B324
De Zul 10
7522 NJ Enschede
The Netherlands


Mailing Address

University of Twente
Faculty of Behavioural, Management and Social Sciences
Cubicus  B324
P.O. Box 217
7500 AE Enschede
The Netherlands