🧑‍💻️ About Me

I was born on December 5, 1998, and raised in Shenzhen, one of the largest cities in China. I am currently a final-year PhD student in Computer Science at The Chinese University of Hong Kong, Shenzhen. Before my PhD, I obtained my bachelor’s degree in Statistics from Sun Yat-sen University, Guangzhou, China, in 2020. I love math, coding, and sports.

I also spent some wonderful time interning at SenseTime and at Alibaba’s DAMO Academy.

💬 About My Research

Throughout my PhD studies, I have focused on developing new optimization theory and algorithms to tackle challenges in modern machine learning applications, including privacy-preserving machine learning, distributed optimization, and, most recently, generative models such as LLMs and Diffusion Models. I’d like to share three key mindsets that have guided my research.

(1) Optimization with Imperfect Function Feedback. In many settings, obtaining perfect function feedback, such as exact gradients or function values, can be challenging. For example, distributed optimization requires compressing function feedback to improve communication efficiency between devices (Tang et al., AAAI 2024), and AI-generated content can be optimized with only human ranking feedback (Tang et al., ICLR 2024). The high-level idea behind these studies is to accurately estimate the ground-truth gradient from the imperfect feedback using statistical techniques, and then apply this estimate in optimization.
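To make the idea concrete, here is a minimal sketch of the classic two-point zeroth-order gradient estimator, which recovers a gradient direction from function values alone. This illustrates the general recipe rather than the exact estimators in the papers above; the function names, query budget, and step size are all illustrative assumptions.

```python
import numpy as np

def zeroth_order_gradient(f, x, num_queries=50, mu=1e-3, seed=None):
    """Estimate grad f(x) using only (possibly noisy) function values:
    average (f(x + mu*u) - f(x)) / mu * u over random directions u."""
    rng = np.random.default_rng(seed)
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.standard_normal(x.shape)
        grad += (f(x + mu * u) - fx) / mu * u
    return grad / num_queries

# Toy usage: minimize a quadratic given only noisy value feedback.
f = lambda x: np.sum((x - 1.0) ** 2) + 0.01 * np.random.randn()
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zeroth_order_gradient(f, x)  # plain gradient descent step
```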

(2) Learning Your Optimization Problem from Data. Many engineering problems are solved by formulating an optimization problem over a manually constructed model of the target world, including the objective function and the constraints. However, these hand-crafted models can be difficult to create and often provide only a crude approximation of the real world due to simplified assumptions. With the rise of generative models like LLMs and Diffusion Models, an exciting alternative is to use the learned distribution as a world model for optimization, effectively learning the constraints of the problem from data. Similarly, the objective function can also be learned from data, for example by training a reward model on human preferences (Ouyang et al.). With this approach, it becomes possible to tackle complex and abstract engineering problems such as “generate a beautiful image” by combining an image generative model (acting as the feasible region) with a reward model that evaluates the image’s aesthetics (acting as the objective function) (Tang et al., 2024, preprint).
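The overall pattern fits in a few lines of code. The sketch below uses untrained placeholder modules in place of a real generative model and a real reward model; only the structure, maximizing a learned reward over a learned generator’s latent space by gradient ascent, is the point.

```python
import torch

# Hypothetical stand-ins: `generator` plays the role of a trained generative
# model (latent z -> output) and `reward_model` a trained preference/reward
# model scoring that output. Both are untrained placeholders here.
generator = torch.nn.Linear(16, 64)
reward_model = torch.nn.Linear(64, 1)

# Optimize the latent variable so the generated output scores highly.
z = torch.randn(16, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    loss = -reward_model(generator(z)).squeeze()  # maximize learned reward
    loss.backward()
    optimizer.step()
```

Because z stays in the generator’s latent space, the learned distribution acts as the feasible region while the reward model supplies the objective.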

(3) Useful Optimization Tricks. The right optimization trick can be unreasonably effective in some machine learning applications. For example, entropy-regularized optimal transport can smooth optimization over permutations (Tang et al., UAI 2023), and techniques for solving nonlinear equations can expedite the sampling of diffusion models (Tang et al., ICML 2024).
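As one example of such a trick, entropy-regularized optimal transport replaces a hard permutation with a doubly stochastic matrix computed by Sinkhorn iterations, which is smooth and amenable to gradient-based optimization. The sketch below assumes uniform marginals and a small dense cost matrix; it illustrates the relaxation, not the specific algorithm from the UAI 2023 paper.

```python
import numpy as np

def sinkhorn(C, reg=0.1, iters=200):
    """Entropy-regularized OT with uniform marginals: scale K = exp(-C/reg)
    into a doubly stochastic matrix, a smooth relaxation of the permutation
    minimizing the assignment cost C."""
    K = np.exp(-C / reg)
    u = np.ones(C.shape[0])
    v = np.ones(C.shape[1])
    for _ in range(iters):
        u = 1.0 / (K @ v)    # rescale so row sums approach 1
        v = 1.0 / (K.T @ u)  # rescale so column sums approach 1
    return u[:, None] * K * v[None, :]

# As reg -> 0, the soft assignment sharpens toward a hard permutation.
P = sinkhorn(np.random.rand(4, 4))
print(P.sum(axis=0), P.sum(axis=1))  # both approximately all ones
```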

I am open to research collaborations and discussions, as well as opportunities in industry. Please don’t hesitate to contact me!

🔥 News

  • 2024.05:  🎉🎉 Hello there! I’ve just launched my homepage to share my journey in Machine Learning.

📝 Featured Publications

  • “Tuning-Free Alignment of Diffusion Models with Direct Noise Optimization”, Preprint

    Zhiwei Tang, Jiangweizhi Peng, Jiasheng Tang, Mingyi Hong, Fan Wang, Tsung-Hui Chang

    Paper

  • “Accelerating Parallel Sampling of Diffusion Models”, ICML 2024

    Zhiwei Tang, Jiasheng Tang, Hao Luo, Fan Wang, Tsung-Hui Chang

    Paper

  • “Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles”, ICLR 2024

    Zhiwei Tang, Dmitry Rybin, Tsung-Hui Chang

    Paper Code

  • “$z$-SignFedAvg: A Unified Stochastic Sign-based Compression for Federated Learning”, AAAI 2024

    Zhiwei Tang, Yanmeng Wang, Tsung-Hui Chang

    Paper

  • “Low-Rank Matrix Recovery With Unknown Correspondence”, UAI 2023

    Zhiwei Tang, Tsung-Hui Chang, Xiaojing Ye, Hongyuan Zha

    Paper

For my full publication list, please see my Google Scholar profile or my CV.

📖 Education

  • 2020.09 - Present, PhD in Computer Science, The Chinese University of Hong Kong, Shenzhen, China
  • 2016.09 - 2020.07, BS in Statistics, Sun Yat-sen University, Guangzhou, China

💻 Internships

  • 2023.10 - 2024.06, Research Intern, DAMO Academy, Alibaba Group, Hangzhou, China.
  • 2019.10 - 2020.07, Research Intern, SenseTime Group, Shenzhen, China.

🔍 Review Service

  • NeurIPS 2021-2024
  • ICLR 2024
  • ICML 2022-2024
  • ICASSP 2022-2023