297 citations (2016–2024), based on content available in repository [source: Scopus]

Personal profile

Shiwei Liu is a Newton International Fellow at the University of Oxford. He obtained his Ph.D. cum laude (distinguished Ph.D. thesis) from Eindhoven University of Technology in 2022, and his Ph.D. thesis received the 2023 Best Dissertation Runner-up Award from Informatics Europe. His research interests lie in leveraging, understanding, and expanding the role of sparsity in neural networks, with impact spanning many important topics such as efficient training, inference, and transfer of large foundation models; robustness and trustworthiness; generative AI; and graph learning.

Collaborations and top research areas from the last five years: recent external collaboration at the country/territory level.

Research output

  • Dynamic Data Pruning for Automatic Speech Recognition

    Xiao, Q. (Corresponding author), Ma, P. (Corresponding author), Fernandez-Lopez, A., Wu, B., Yin, L., Petridis, S., Pechenizkiy, M., Pantic, M., Mocanu, D. C. & Liu, S., 2024, Interspeech 2024: Kos, Greece, 1–5 September 2024. ISCA, p. 4488–4492, 5 p.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Open Access · 25 Downloads (Pure)
  • E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation

    Wu, B., Xiao, Q., Liu, S., Yin, L., Pechenizkiy, M., Mocanu, D. C., van Keulen, M. & Mocanu, E., 15 Dec 2024, Proceedings of the 38th Conference on Neural Information Processing Systems, NeurIPS 2024. Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J. & Zhang, C. (eds.). Curran Associates, p. 118483–118512, 30 p. (Advances in Neural Information Processing Systems; vol. 37).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Open Access · 2 Downloads (Pure)
  • Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs Difficult Downstream Tasks in LLMs

    Yin, L., Jaiswal, A., Liu, S., Kundu, S. & Wang, Z. (Corresponding author), 2024, Proceedings of the 41st International Conference on Machine Learning. Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J. & Berkenkamp, F. (eds.). PMLR, p. 57053–57068, 16 p. (Proceedings of Machine Learning Research (PMLR); vol. 235).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Open Access · 8 Downloads (Pure)
  • NeurRev: Train Better Sparse Neural Network Practically via Neuron Revitalization

    Li, G., Yin, L., Ji, J., Niu, W., Qin, M., Ren, B., Guo, L., Liu, S. & Ma, X., 11 May 2024, The Twelfth International Conference on Learning Representations, ICLR 2024. 16 p.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Open Access · 71 Downloads (Pure)
  • Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity

    Yin, L., Wu, Y., Zhang, Z., Hsieh, C.-Y., Wang, Y., Jia, Y., Li, G., Jaiswal, A., Pechenizkiy, M., Liang, Y., Bendersky, M., Wang, Z. & Liu, S. (Corresponding author), 2024, Proceedings of the 41st International Conference on Machine Learning. Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J. & Berkenkamp, F. (eds.). PMLR, p. 57101–57115, 15 p. (Proceedings of Machine Learning Research (PMLR); vol. 235).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Open Access · 1 Citation (Scopus) · 107 Downloads (Pure)