News
- 29 March 2024, I have one PhD position and one Postdoc position open on scalable energy-efficient deep learning (link and link)
- 29 March 2024, I have one PhD position open on sparse training for deep reinforcement learning (link)
- 19 January 2024, The MISD project was accepted (link). Happy to be part of a larger effort within UT on sustainable data centers.
- 6 November 2023, Alexander's work on "Enhancing Learning in Sparse Neural Networks: A Hebbian Learning Approach" was nominated for the Best Student Thesis Abstract Award at BNAIC 2023.
- 22 September 2023, I gave a keynote talk on "Sparse training of neural networks" at the E-pi: Re-thinking Uncertainty and AI Workshop at TU Delft (link)
- 6 June 2023, I am doing a short research visit and giving a seminar on "Sparse Training in Deep Reinforcement Learning" at CaSToRC, the National HPC Competence Center - The Cyprus Institute (link)
- 24 April 2023, Our tutorial "Sparse Training for Supervised, Unsupervised, Continual, and Deep Reinforcement Learning with Deep Neural Networks" has been accepted at IJCAI 2023.
- 11 April 2023, I gave an invited talk on "Sparsity in neural networks" at the Big Data & AI workshop during the Smart Diaspora 2023 conference in Timisoara, Romania.
- 30 January 2023, I gave an invited talk on "Scalable and Efficient Agents using Sparse Neural Networks" at the AI for E&S Think Tank, TU Delft (link)
- 4 January 2023, Our paper "Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning" has been accepted at AAMAS 2023 (link)
- 4 January 2023, The third edition of the workshop "Sparsity in Neural Networks: On practical limitations and tradeoffs between sustainability and efficiency" (SNN 2023) will be co-located with ICLR 2023 (link)
- 27 October 2022, I gave an invited talk at the CoMBE Seminar Series, University of Duisburg-Essen
- 14 September 2022, One paper on sparse training and time series classification was accepted at NeurIPS 2022 (link)
- 15 July 2022, I gave an invited talk titled "Sparse training in supervised, unsupervised, and deep reinforcement learning" during the AI Seminar at the University of Alberta / Alberta Machine Intelligence Institute (link)
- 13 July 2022, I was recognised as an Outstanding Reviewer at ICML 2022
- 13 July 2022, We are organising the second edition of the "Sparsity in Neural Networks: Advancing Understanding and Practice" workshop (SNN 2022) (link)
- 10 June 2022, I gave an invited talk at Calgary AI, University of Calgary
- 21 May 2022, I am doing a research visit to the group of Dr. Matthew Taylor at the University of Alberta
- 10 May 2022, Our paper "Dynamic Sparse Training for Deep Reinforcement Learning" received the Best Paper Award at ALA 2022, co-located with AAMAS 2022 (link)
- 25 April 2022, We had the pleasure of hosting Utku Evci, Research Engineer at Google Brain Montreal, for a very engaging in-person talk.
- 20 April 2022, One paper on sparse training and deep reinforcement learning was accepted at IJCAI-ECAI 2022 (link)
- 15 April 2022, Our tutorial "Sparse Neural Networks Training" has been accepted at ECML PKDD 2022 (link)
- 20 January 2022, One paper on dynamic sparse training has been accepted at ICLR 2022 (link)
Organisations
Publications
2023
- E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation. ArXiv.org. Wu, B., Xiao, Q., Liu, S., Yin, L., Pechenizkiy, M., Mocanu, D. C., Keulen, M. V. & Mocanu, E. https://doi.org/10.48550/arXiv.2312.04727
- Dynamic Sparse Network for Time Series Classification: Learning What to "See". Xiao, Q., Wu, B., Zhang, Y., Liu, S., Pechenizkiy, M., Mocanu, E. & Mocanu, D. C. https://drive.google.com/file/d/10pxPf2aWTdMumUba_8-7v_jEZ3-K_uV3/view
- Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning. In AAMAS '23: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 1932-1941). ACM Press. Grooten, B., Sokar, G., Dohare, S., Mocanu, E., Taylor, M. E., Pechenizkiy, M. & Mocanu, D. C. https://dl.acm.org/doi/10.5555/3545946.3598862
- Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning. ArXiv.org. Grooten, B., Sokar, G., Dohare, S., Mocanu, E., Taylor, M. E., Pechenizkiy, M. & Mocanu, D. C. https://doi.org/10.48550/arXiv.2302.06548
- Enhancing Learning in Sparse Neural Networks: A Hebbian Learning Approach. In BNAIC/BENELEARN 2023. de Ranitz, A., Beldad, A. D. & Mocanu, E.
2022
- Dynamic Sparse Network for Time Series Classification: Learning What to "See". ArXiv.org. Xiao, Q., Wu, B., Zhang, Y., Liu, S., Pechenizkiy, M., Mocanu, E. & Mocanu, D. C. https://doi.org/10.48550/arXiv.2212.09840
- Dynamic Sparse Network for Time Series Classification: Learning What to "See". Xiao, Q., Wu, B., Zhang, Y., Liu, S., Pechenizkiy, M., Mocanu, E. & Mocanu, D. C. https://openreview.net/forum?id=ZxOO5jfqSYw
- Towards Implementing Truly Sparse Connections in Deep RL Agents. Grooten, B. J., Sokar, G., Mocanu, E., Dohare, S., Taylor, M. E., Pechenizkiy, M. & Mocanu, D. C.
- Dynamic Sparse Training for Deep Reinforcement Learning. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022 (pp. 3437-3443). Sokar, G. A. Z. N., Mocanu, E., Mocanu, D. C., Pechenizkiy, M. & Stone, P. https://doi.org/10.24963/ijcai.2022/477
Research profiles
Courses academic year 2023/2024
Courses in the current academic year are added as soon as they are finalised in the Osiris system, so the list may not yet be complete for the whole academic year.
- 192199508 - Research Topics CS
- 192199968 - Internship CS
- 192199978 - Final Project CS
- 192399979 - Final Project BIT
- 201300058 - Research Topics BIT
- 201300059 - Internship BIT
- 201300086 - Research Topics 2 CS
- 201400171 - Capita Selecta Software Technology
- 201500371 - Capita Selecta BIT
- 201600017 - Final Project Preparation
- 201600070 - Machine Learning I
- 201600071 - Machine Learning II
- 201800419 - Capita Selecta Comp. Vision & Biometrics
- 201800524 - Research Topics EIT
- 201900194 - Research Topics I-Tech
- 201900195 - Final Project I-Tech
- 201900200 - Final Project EMSYS
- 201900234 - Internship I-Tech
- 202001434 - Internship EMSYS
- 202001521 - Capita Selecta EngD (external course)
- 202001522 - Capita Selecta EngD (in-company tr.)
- 202001613 - MSc Final Project BIT + CS
- 202001614 - MSc Final Project CS + I-Tech
- 202001616 - Research Topics CS + I-TECH
- 202200251 - Capita Selecta DST
- 202300070 - Final Project EMSYS
Courses academic year 2022/2023
- 192166200 - Capita Selecta I-TECH
- 192199508 - Research Topics CS+IST
- 192199968 - Internship CS
- 192199978 - Final Project CS+IST
- 192399979 - Final Project BIT
- 201300058 - Research Topics BIT
- 201300059 - Internship BIT
- 201300086 - Research Topics 2 CS+IST
- 201300294 - Master Thesis SEC Computer Science
- 201400171 - Capita Selecta Software Technology
- 201400174 - Data Science
- 201500363 - Data Science Additional Topics
- 201500371 - Capita Selecta BIT
- 201600017 - Final Project Preparation
- 201600070 - Machine Learning I
- 201600071 - Machine Learning II
- 201800419 - Capita Selecta Comp. Vision & Biometrics
- 201800524 - Research Topics EIT
- 201900194 - Research Topics I-Tech
- 201900195 - Final Project I-Tech
- 201900200 - Final Project EMSYS
- 201900234 - Internship I-Tech
- 202001434 - Internship EMSYS
- 202001613 - MSc Final Project BIT + CS
- 202001614 - MSc Final Project CS + I-Tech
- 202001616 - Research Topics CS + I-TECH
- 202200251 - Capita Selecta DST
- 202200377 - Internship I-Tech / Robotics
- 202200399 - Internship I-Tech / Robotics
In the press
- 2021, Inovex GmbH, AI blog Pruning and Sparsification of Neural Networks
- 2021, Towards Data Science, Neural Network Pruning 101
- 2020, Numenta, The Case For Sparsity in Neural Networks, Part 2: Dynamic Sparsity
- 2019, TechXplore, A bio-inspired approach to enhance learning in ANNs
- 2018, Nature Collections, The multidisciplinary nature of machine intelligence
- 2018, Towards Data Science, The Sparse Future of Deep Learning
- 2018, Phys.org, New AI method increases the power of artificial neural networks
- 2018, E&T magazine, Artificial neural networks could run on cheap computers using new method
- 2018, Technologist, AI method increases the power of artificial neural networks
- 2018, Elektor magazine, Nieuw algoritme versnelt kunstmatige intelligentie kwadratisch
- 2018, EMERCE, Nieuwe AI-methode vergroot de kracht van kunstmatige neurale netwerken
- 2018, deVolkskrant, Ook het stroomnet denkt straks na
Address
![Zilverling building](https://1348661504.rsc.cdn77.org/.uc/iff4689c40103f3eb1100f2c8f403e85637f06a7b284b0801e3bc0268018041/zilverling.jpg)
University of Twente
Zilverling (building no. 11)
Hallenweg 19
7522 NH Enschede
Netherlands
University of Twente
Zilverling
P.O. Box 217
7500 AE Enschede
Netherlands