Boqian is a first-year Ph.D. student at the University of Twente in the Data Management & Biometrics (DMB) group. Her Ph.D. research focuses on the theoretical aspects of sparse neural networks, with the goal of uncovering the mechanisms that underlie their practical effectiveness.

Expertise

  • Computer Science

    • Classification Learning
    • Online Portfolio
    • Image Segmentation
    • Models
    • Receptive Field
    • Vision Transformer
  • Economics, Econometrics and Finance

    • Portfolio Selection
    • Mean Reversion

Research

Dynamic sparse training algorithms have shown promise in achieving high performance while reducing resource costs, making them an attractive option in machine learning. However, their theoretical properties remain largely unexplored. My research aims to fill this gap by investigating the theoretical properties of dynamic sparse training, and to turn these insights into guidelines for applying it to real-world problems.
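For readers unfamiliar with the technique: dynamic sparse training keeps a network sparse throughout training and periodically rewires its connectivity, typically by pruning the weakest active connections and regrowing the same number elsewhere (as in SET, Mocanu et al., 2018, or RigL). The sketch below is a minimal NumPy illustration of that prune-and-regrow step on a single weight matrix; the layer shape, prune fraction, and random regrowth rule are illustrative assumptions, not the specifics of any of the publications listed here.

```python
import numpy as np


def prune_and_regrow(weights, mask, prune_fraction=0.3, rng=None):
    """One prune-and-regrow update on a single sparse weight matrix.

    Drops the smallest-magnitude active connections and regrows the same
    number at randomly chosen inactive positions, so sparsity stays constant.
    """
    rng = np.random.default_rng() if rng is None else rng
    active = np.flatnonzero(mask)                 # indices of current connections
    n_update = int(prune_fraction * active.size)
    if n_update == 0:
        return weights, mask

    # Prune: deactivate the active weights with the smallest magnitude.
    smallest = active[np.argsort(np.abs(weights.flat[active]))[:n_update]]
    mask.flat[smallest] = False
    weights.flat[smallest] = 0.0

    # Regrow: activate the same number of inactive positions, chosen at random
    # (gradient-based growth, as in RigL, is a common alternative criterion).
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=n_update, replace=False)
    mask.flat[grown] = True
    weights.flat[grown] = rng.normal(scale=0.01, size=n_update)  # small re-init
    return weights, mask


# Toy usage: a ~90%-sparse 64x32 layer, rewired as it would be every few epochs.
rng = np.random.default_rng(0)
mask = rng.random((64, 32)) < 0.1                 # roughly 10% of connections active
weights = np.where(mask, rng.normal(size=(64, 32)), 0.0)
weights, mask = prune_and_regrow(weights, mask, prune_fraction=0.3, rng=rng)
print(f"fraction of active connections after rewiring: {mask.mean():.2f}")
```

In a full training loop, such a rewiring step would be interleaved with ordinary gradient updates applied only to the active weights.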

Publications

2024

Are Sparse Neural Networks Better Hard Sample Learners? (2024). In British Machine Vision Conference (BMVC 2024). Xiao, Q., Wu, B., Yin, L., Gadzinski, C. N., Huang, T., Pechenizkiy, M. & Mocanu, D. C.

Insights into Dynamic Sparse Training: Theory Meets Practice (2024). [Conference poster] European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) 2024. Wu, B., van Keulen, M., Mocanu, D. C. & Mocanu, E.

Robust online portfolio optimization with cash flows (2024). Omega, 129, Article 103169 (e-pub ahead of print). Lyu, B., Wu, B., Guo, S., Gu, J. & Ching, W.-K. https://doi.org/10.1016/j.omega.2024.103169

Dynamic Data Pruning for Automatic Speech Recognition (2024). In Interspeech 2024 (pp. 4488-4492). Xiao, Q., Ma, P., Fernandez-Lopez, A., Wu, B., Yin, L., Petridis, S., Pechenizkiy, M., Pantic, M., Mocanu, D. C. & Liu, S.

2023

E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation (2023). [Preprint] arXiv. Wu, B., Xiao, Q., Liu, S., Yin, L., Pechenizkiy, M., Mocanu, D. C., van Keulen, M. & Mocanu, E. https://doi.org/10.48550/arXiv.2312.04727

Weighted Multivariate Mean Reversion for Online Portfolio Selection (2023). In Machine Learning and Knowledge Discovery in Databases: Research Track, ECML PKDD 2023, Turin, Italy, September 18–22, 2023, Proceedings, Part V (pp. 255-270). Lecture Notes in Computer Science, Vol. 14173. Wu, B., Lyu, B. & Gu, J. https://doi.org/10.1007/978-3-031-43424-2_16

Can Less Yield More? Insights into Truly Sparse Training (2023). [Conference poster] ICLR 2023 Workshop on Sparsity in Neural Networks. Xiao, Q., Wu, B., Yin, L., van Keulen, M. & Pechenizkiy, M. https://drive.google.com/file/d/1kbWZ9ejU9XvtOMRtAcVYmcoRCDIWj3zy/view

Dynamic Sparse Network for Time Series Classification: Learning What to "See" (2023). [Conference poster] ICLR 2023 Workshop on Sparsity in Neural Networks. Xiao, Q., Wu, B., Zhang, Y., Liu, S., Pechenizkiy, M., Mocanu, E. & Mocanu, D. C. https://drive.google.com/file/d/10pxPf2aWTdMumUba_8-7v_jEZ3-K_uV3/view

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 Using Sparsity (2023). In The Eleventh International Conference on Learning Representations (ICLR 2023). OpenReview. Liu, S., Chen, T., Chen, X., Chen, X., Xiao, Q., Wu, B., Pechenizkiy, M., Mocanu, D. C. & Wang, Z. https://arxiv.org/abs/2207.03620

2022

Dynamic Sparse Network for Time Series Classification: Learning What to "See" (2022). [Preprint] arXiv. Xiao, Q., Wu, B., Zhang, Y., Liu, S., Pechenizkiy, M., Mocanu, E. & Mocanu, D. C. https://doi.org/10.48550/arXiv.2212.09840
