M. Nauta MSc (Meike)


About Me

I am a PhD Candidate at the Data Science group of the University of Twente, the Netherlands. My research interests include explainable artificial intelligence, deep learning, causal discovery and data mining.

Daily life is increasingly governed by decisions made by algorithms, due to the growing availability of big data sets. Most machine learning algorithms are black-box models, i.e., they give no insight into how they reach their outcomes, which prevents users from trusting the model. If we cannot understand the reasons for their decisions, how can we be sure that the decisions are correct? What if they are wrong, discriminatory or amoral?
I aim to create new machine learning methods that can explain their decision-making process, so that users can understand the reasons behind a prediction. These explanations enable users to check for correctness, fairness and robustness, and can also be useful for knowledge discovery.


Engineering & Materials Science
Convolutional Neural Networks
Decision Making
Deep Learning
Image Recognition
Neural Networks


Paalvast, O., Nauta, M., Koelle, M., Geerdink, J., Vijlbrief, O., Hegeman, J. H., & Seifert, C. (2022). Radiology report generation for proximal femur fractures using deep classification and language generation models. Artificial Intelligence in Medicine, 128, [102281]. https://doi.org/10.1016/j.artmed.2022.102281
Nauta, M., Jutte, A., Provoost, J., & Seifert, C. (2022). This Looks Like That, Because... Explaining Prototypes for Interpretable Image Recognition. In M. Kamp et al. (Eds.), Machine Learning and Principles and Practice of Knowledge Discovery in Databases - International Workshops of ECML PKDD 2021, Proceedings (Vol. 1524, pp. 441-456). (Communications in Computer and Information Science; Vol. 1524 CCIS). Springer Science + Business Media. https://doi.org/10.1007/978-3-030-93736-2_34
Nauta, M., van Bree, R., & Seifert, C. (2021). Intrinsically Interpretable Image Recognition with Neural Prototype Trees. Abstract from Beyond Fairness: Towards a Just, Equitable, and Accountable Computer Vision, Online Event.
Nauta, M., van Bree, R., & Seifert, C. (2021). Neural Prototype Trees for Interpretable Fine-Grained Image Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 14933-14943). IEEE. https://doi.org/10.1109/CVPR46437.2021.01469
Nauta, M., van Putten, M. J. A. M., Tjepkema-Cloostermans, M. C., Bos, J. P., van Keulen, M., & Seifert, C. (2020). Interactive Explanations of Internal Representations of Neural Network Layers: An Exploratory Study on Outcome Prediction of Comatose Patients. In K. Bach, R. Bunescu, C. Marling, & N. Wiratunga (Eds.), KDH 2020: 5th International Workshop on Knowledge Discovery in Healthcare Data (pp. 5-11). (CEUR Workshop Proceedings; Vol. 2675). CEUR. http://ceur-ws.org/Vol-2675/
Theodorus, A., Nauta, M., & Seifert, C. (2020). Evaluating CNN interpretability on sketch classification. In W. Osten, D. Nikolaev, & J. Zhou (Eds.), 12th International Conference on Machine Vision, ICMV 2019 [114331Q] (Proceedings of SPIE - The International Society for Optical Engineering; Vol. 11433). SPIE Press. https://doi.org/10.1117/12.2559536
Peters, M., Kempen, L., Nauta, M., & Seifert, C. (2019). Visualising the Training Process of Convolutional Neural Networks for Non-Experts. Paper presented at 31st Benelux Conference on Artificial Intelligence, BNAIC 2019, Brussels, Belgium.


Contact Details

Visiting Address

University of Twente
Faculty of Electrical Engineering, Mathematics and Computer Science
Zilverling (building no. 11), room 4055
Hallenweg 19
7522 NH Enschede
The Netherlands


Mailing Address

University of Twente
Faculty of Electrical Engineering, Mathematics and Computer Science
Zilverling 4055
P.O. Box 217
7500 AE Enschede
The Netherlands