I’m a first-year Ph.D. student at the University of California, Santa Cruz, where I am fortunate to be advised by Prof. Cihang Xie. I received my B.S. degree in Computer Science from Zhejiang University in 2023. Previously, I spent wonderful time at the National University of Singapore, Shanghai AI Lab, and PhiGent Robotics.

My research interests include Vision & Language, Foundation Models, and Data-centric AI.

I am open to research collaborations. I am also looking for research intern positions in the summer of 2025.

šŸ”„ News

  • 2024.11: šŸš€ We released CLIPS, which achieves SOTA results in zero-shot image-text retrieval on MSCOCO and Flickr30K and enhances the visual capability of LLaVA.
  • 2024.09: šŸš€ I joined the VLAA Lab at UCSC as a Ph.D. student!
  • 2023.11: New preprint of MLLMs-Augmented Visual-Language Representation Learning, along with code and datasets.
  • 2023.10: New preprint of the extended version of DREAM (DREAM+), accelerating dataset distillation by 15x.
  • 2023.10: I arrived in Paris for the ICCV 2023 conference.
  • 2023.08: šŸš€ I joined the HPC-AI Lab at NUS as a research assistant.
  • 2023.07: šŸŽ‰ DREAM was accepted to ICCV 2023!

šŸ“ Publications

arXiv

CLIPS: An Enhanced CLIP Framework for Learning with Synthetic Captions

Yanqing Liu, Xianhang Li, Zeyu Wang, Bingchen Zhao, Cihang Xie

Github

  • This work introduces two simple yet effective designs to better leverage richly described synthetic captions, achieving state-of-the-art (SOTA) results in zero-shot image-text retrieval on MSCOCO and Flickr30K and enhancing the visual capability of LLaVA.
arXiv

MLLMs-augmented Visual-language Representation Learning

Yanqing Liu, Kai Wang, Wenqi Shao, Ping Luo, Yu Qiao, Mike Zheng Shou, Kaipeng Zhang, Yang You

Github

  • This work demonstrates the potential of Multimodal Large Language Models (MLLMs) to significantly enhance vision-language pre-training.
arXiv

DREAM+: Efficient Dataset Distillation by Bidirectional Representative Matching

Yanqing Liu, Jianyang Gu, Kai Wang, Zheng Zhu, Kaipeng Zhang, Wei Jiang, Yang You

Github

  • This work presents the extended version of DREAM, achieving a remarkable 15x acceleration in dataset distillation.
ICCV 2023

DREAM: Efficient Dataset Distillation by Representative Matching

Yanqing Liu, Jianyang Gu, Kai Wang, Zheng Zhu, Wei Jiang, Yang You

Github

  • This work accelerates dataset distillation by over 8x, significantly improving efficiency.

šŸ“– Education

  • 2024.09 - present, Ph.D. student, University of California, Santa Cruz.
  • 2019.09 - 2023.06, B.S., College of Computer Science and Technology, Zhejiang University, Hangzhou.

šŸ“‚ Activities

  • Reviewer for CVPR 2024, NeurIPS 2024, AAAI 2025, and AISTATS 2025.